Goals: Understand how 3D game objects are represented geometrically and how they are drawn. Master the mathematics of the transformation pipeline, from model space through to screen space. Review basic transformations such as scaling, rotation, and translation. Be able to move points from one coordinate space to another.
Goals: Get comfortable with the specific math operations and data types we will use in future lessons. Examine vectors, planes, and matrices and understand their roles in the transformation pipeline and in other common cases. Review dot and cross products, normalization, and matrix and vector multiplication. Learn the D3DX equivalent data types and functions for the operations discussed. Understand perspective projection and how the matrix is constructed. Learn how arbitrary fields of view can be created to model different camera settings.
Goals: Begin examining the DirectX Graphics pipeline and see how the different pieces relate to what we have already learned. Study the COM programming model to better understand the low-level processes involved when working with the DirectX API. Learn how to properly initialize the DirectX environment. Create a rendering device for output. Understand important device resources like window settings, front and back buffers, depth buffers, swap chains, and surface formats.
Goals: Use presentation parameters for device setup and understand their role in device creation. Develop strategies to handle lost devices: what they are and how to recover when one occurs. Spend time looking at surface formats and adapter formats. Talk about the different frame buffer formats and examine basic depth buffering. Create a fully configured Direct3D device.
Goals: Learn how to render 3D objects as wireframe or solid objects. Examine how to apply various forms of shading. Learn about flexible vertex formats, triangle data, and the DrawPrimitive function. Look at core device render states used when drawing - depth buffering, lighting and shading, back face culling, etc. Talk about transformation states and learn how to pass matrices to the device for use in the transformation pipeline. Learn how to clear buffers, begin and end scene rendering, and present rendered results to the viewer.
Goals: Start to examine more optimal rendering strategies in DirectX. Get comfortable with creating, filling, and drawing vertex and index buffers. Look at indexed and non-indexed mesh rendering for static and dynamic (animated) geometry. Understand device memory pools and know which is appropriate for a given job. Examine indexed triangle strip generation and the role of degenerate triangles.
Goals: Take a more detailed look at the view transformation and its associated matrix. Create first person, third person, and spacecraft camera types. Learn how to use rendering viewports and see what role matrices play in that process. Use a camera's clipping planes (frustum) to optimize scene rendering.
Goals: Understand the fixed-function DirectX Graphics vertex lighting pipeline and its advantages and disadvantages. Examine the primary lighting terms (ambient/diffuse/specular/emissive) modeled in real-time games. Get comfortable with the most common light types (point/spot/directional) and see how to set up their properties. Configure the lighting pipeline to use our light sources. Learn the role of vertex normals and how to calculate them when necessary. Discuss materials and how they define a surface's interaction with lights in the environment.
Goals: Understand what textures are and how they are defined in memory. Understand mip maps, how they relate to anti-aliasing, memory footprint, and bandwidth. Look at the various options for loading textures from disk or memory. Learn how to set a texture for rendering. Understand the relationship between texture coordinates and addressing modes. Talk about aliasing and common artifacts and how to use filters to improve visual quality.
Goals: Learn how to configure the texture pipeline for single and multi-texturing operations. Examine texture compression as a means for reducing memory requirements and improving performance. Use transformation matrices to animate texture coordinates. Get familiar with DirectX texture and surface types and their associated utility functions.
Goals: Understand the general blending equation and the related concept of 'alpha' blending. Know where transparency data can be stored (vertices, materials, textures) and the associated pros and cons. Learn how to configure the transformation and texture pipelines to do blending operations. Use alpha testing and alpha surfaces to reject specific texels during rendering (e.g., for chain link fences). Study front-to-back sorting algorithms for transparent polygon rendering. Add colored fog to a scene using both vertex and pixel level computations. Learn the traditional equations for global fog effects: linear, exponential, and exponential squared.
Goals: Introduce the mesh containers in the D3DX library. Use scene level attribute batching and subset rendering to improve performance. Learn optimization techniques to speed up rendering on modern hardware. Look at how to import X file geometry. Learn how to construct and fill mesh internal buffers manually. Discuss cloning (copying) of mesh data and some of the features available during the process. Learn how to manage geometric level of detail (LOD) using view-independent progressive meshes. Look at how to construct and use progressive meshes and see how they work algorithmically. Examine mesh simplification and assorted other useful mesh utility functions.
Goals: Look at how to import and manage more complex 3D models and scenes. Introduce frames of reference and parent-child hierarchical relationships. Use hierarchies to build more complex scenery consisting of independent, animation-ready meshes. Study X file templates to see how scene data is stored and learn how to load custom data chunks. Examine the D3DXLoadMeshHierarchyFromX function in detail, including callback mechanisms and memory management. Understand how to traverse, transform, and render a hierarchy of meshes. Introduce a simple animation controller to prepare for the next set of topics.
Goals: Understand the fundamentals of animating game scenes. Use keyframe data to animate the hierarchies introduced previously. Learn the representations of X file animation data and how it translates to D3DX data structures in the application. Understand how an animation controller interpolates keyframe data and how it can be controlled. Construct a custom animation set object that can be plugged into the D3DX animation system.
Goals: Learn how to use the animation mixer to blend multiple simultaneous animation tracks. Synchronize the animation timeline with events (e.g., playing sound effects or triggering code).
Goals: Learn how skinning and skeletal animation provide realistic visual results. Understand skins, bones, and skeletons and how they are constructed, animated, and rendered. Look at skinning-related X file data templates and the matching game data structures. Examine software and hardware skinning. Examine non-indexed and palette-driven indexed skinning techniques. Integrate animated characters into our experimental framework. Construct a skeleton and skin model programmatically. Generate simple animated trees for demonstration purposes. Extend our lab project middle-tier to include data-driven support for animation switching and blending.
Goals: Understand broad and narrow phase collision detection algorithms. Develop a collision detection and response system based on ellipsoids. Understand the mathematics of ellipsoid space. Examine intersection algorithms for the narrow phase. Test rays against common game primitives. Test spheres against triangle interiors. Test swept spheres against the edges and vertices of triangles. Review solving quadratic equations and their role in the detection phase. Learn how to support dynamic objects in terms of detection and response.
Goals: Examine axis-aligned hierarchical spatial partitioning data structures like quadtrees, octrees, and kD-trees. Implement broad phase collision detection using spatial partitioning to improve performance. Examine hardware-friendly rendering of spatial trees. Use hierarchical frustum culling to speed up scene rendering. Use frame coherence to improve rendering performance.
Goals: Understand the Binary Space Partitioning (BSP) tree. Learn how to compile BSP node trees and use them for pixel-perfect transparent polygon sorting. Create BSP leaf trees and examine how to add solid and empty space information to our BSP tree representation. Use BSP trees to perform constructive solid geometry (CSG). Learn how to use CSG to merge geometric objects and carve shapes out of other shapes.
Goals: Understand and learn how to calculate potential visibility sets (PVS). Discuss portal generation and clipping. Examine penumbras and anti-penumbras to see how volumetric lighting techniques can be used as visibility proxies. Model the flow of light through the scene for visibility. Learn how to compress PVS information. Use PVS to efficiently render complex scenes. Learn how to avoid problems caused by illegal geometry during BSP compilation.
Goals: Understand how to use effects to manage state and organize scene rendering. Learn how to load and compile effects from files, resources, and memory buffers. Learn how to send custom data to the graphics pipeline for state management and as a prelude to our coming shader discussions.
Goals: Understand shader hardware architecture and the concept of shader models. Learn how to use vertex and pixel shaders to replace fixed-function rendering techniques. See how to use HLSL with effect files to simplify shader integration into our code framework. Understand how data is passed from our application to shader programs running on the GPU. Convert our vertex lighting model to a per-pixel model that supports normal mapping. Introduce render target textures and deferred rendering.