Tag: Advanced Game Tech

GPU Gems – Animation in the “Dawn” Demo

4.1 Introduction

 
 

“Dawn” is a demonstration that was created by NVIDIA Corporation to introduce the GeForce FX product line and illustrate how a high-level language (such as HLSL or Cg) could be used to create a realistic human character. The vertex shaders deform a high-resolution mesh through indexed skinning and morph targets, and they provide setup for the lighting model used in the fragment shaders. The skin and wing fragment shaders offer both range and detail that could not have been achieved before the introduction of advanced programmable graphics hardware. See Figure 4-1.

【Dawn is NVIDIA's demonstration of applying HLSL to a realistic human character on its new product line: the vertex shaders handle morphing and skinning, and the fragment shaders handle lighting.】

 
 


Figure 4-1 A Screen Capture of the Real-Time Dawn

 
 

This chapter discusses how programmable graphics hardware was used to accelerate the animation of the Dawn character in the demo.

【This chapter covers how programmable graphics hardware was used to accelerate the animation of the Dawn character.】

 
 

 
 

 
 

4.2 Mesh Animation

 
 

Traditionally, mesh animation has been prohibitively expensive for complex meshes because it was performed on the CPU, which was already burdened with physical simulation, artificial intelligence, and other computations required by today’s applications. Newer graphics hardware has replaced the traditional fixed-function pipeline with programmable vertex and fragment shaders, and it can now alleviate some of that burden from the CPU.

【Traditional mesh animation was prohibitively expensive because it ran on the CPU, which also carries a great deal of other work; the vertex/fragment shaders of newer graphics hardware can take over part of that load.】

 
 

Sometimes it is still necessary to perform such operations on the CPU. Many stencil-based shadow volume techniques must traverse the transformed mesh in order to find the silhouette edges, and the generation of the dynamic shadow frustum is often best done on the CPU (see Chapter 9, “Efficient Shadow Volume Rendering”). In scenes where the character is drawn multiple times per frame into shadow buffers, glow buffers, and other such temporary surfaces, it may be better to perform the deformations on the CPU if the application becomes vertex-limited. Deciding whether to perform mesh deformations on the CPU or on the GPU should be done on a per-application or even on a per-object basis.

【Sometimes mesh animation still belongs on the CPU: stencil shadow volume techniques, for example, must traverse the transformed mesh to find silhouette edges, which is CPU work. And when a character is rendered many times per frame into stencil/texture targets, a vertex-limited application may be better off deforming on the CPU.】

 
 

The modeling, texturing, and animation of the Dawn character were done primarily in Alias Systems’ Maya package. We therefore based our mesh animation methods on the tool set the software provides. We have since created a similar demo (“Dusk,” used to launch the GeForce FX 5900) in discreet’s 3ds max package, using the same techniques; these methods are common to a variety of modeling packages and not tied to any single workflow. The methods used in these two demos are (indexed) skinning, where vertices are influenced by a weighted array of matrices, and weighted morph targets, used to drive the emotions on Dawn’s face.

【Dawn's art assets were authored in Maya and are used here for the skinning and morphing demo.】

 
 

 
 

4.3 Morph Targets

 
 

Using morph targets is a common way to represent complex mesh deformation, and the NVIDIA demo team has created a variety of demos using this technique. The “Zoltar” demo and the “Yeah! The Movie” demo (content provided by Spellcraft Studio) started with 30 mesh interpolants per second, then removed mesh keys based on an accumulated error scheme. This allowed us to reduce the file size and the memory footprint—up to two-thirds of the original keys could be removed with little to no visible artifacts. In this type of mesh interpolation, there are only two interpolants active at any given time, and they are animated sequentially.

【Morphing is commonly used for mesh deformation; NVIDIA has built many demos around it.】

 
 

Alternatively, morph targets can be used in parallel. Dawn is a standard example of how this approach can be useful. Beginning with a neutral head (27,000 triangles), our artist created 50 copies of that head and modeled them into an array of morph targets, as shown in Figure 4-2. Approximately 30 of those heads corresponded to emotions (such as happy, sad, thoughtful, and so on), and 20 more were modifiers (such as left eyebrow up, right eyebrow up, smirk, and so on). In this style of animation, the morph target weights will probably not add to 1, because you may have (0.8 * happy + 1.0 * ear_wiggle), for example—Dawn is a fairy, after all.

【Alternatively, morph targets can be applied in parallel. Dawn's head has 27,000 triangles, and 50 copies were modeled as the morph-target array; the figure below shows some of them. The final result can be a weighted sum of several morph targets.】

 
 


Figure 4-2 Emotional Blend Targets (Blend Shapes)

 
 

Although such complex emotional faces could have been made entirely of blends of more elemental modifiers, our artist found it more intuitive to model the face in the pose he desired, because it is hard to model an element such as an eyebrow creasing, without seeing how the eyes, cheeks, and mouth work together. This combination also helps with hardware register limitations, described later.

【Although complex faces could be built purely from blends of elemental modifiers, the artist found it more intuitive to model each pose directly, since a detail like a creasing eyebrow is hard to judge without seeing the whole face. Combining targets this way also helps with the hardware register limits described later.】

 
 

 
 

4.3.1 Morph Targets in a High-Level Language

 
 

Luckily, the implementation of morph targets in HLSL or Cg is simple. Assuming that vertexIn is our structure containing per-vertex data, applying morph targets in a linear or serial fashion is easy:

【Fortunately, implementing morph targets in a shader is simple. First, interpolation between successive position keys looks like this:】

 
 

float4 position = (1.0f - interp) * vertexIn.prevPositionKey + interp * vertexIn.nextPositionKey;

 
 

In this code, interp is a constant input parameter in the shader, but prevPositionKey and nextPositionKey are the positions at the prior time and next time, respectively. When applying morph targets in parallel, we find the spatial difference between the morph target and the neutral pose, which results in a difference vector. We then weight that difference vector by a scalar. The result is that a weight of 1.0 will apply the per-vertex offsets to achieve that morph target, but each morph target can be applied separately. The application of each morph target is just a single “multiply-add” instruction:

【interp is a constant input; prevPositionKey/nextPositionKey are the positions at the previous and next key times. Applying several morph targets at the same time is similar: a weighted sum, as follows.】

 
 

// vertexIn.positionDiffN = position morph target N - neutralPosition
float4 position = neutralPosition;
position += weight0 * vertexIn.positionDiff0;
position += weight1 * vertexIn.positionDiff1;
position += weight2 * vertexIn.positionDiff2;

 
 

 
 

4.3.2 Morph Target Implementation

 
 

We wanted our morph targets to influence both the vertex position and the basis (that is, the normal, binormal, and tangent) so that they might influence the lighting performed in the fragment shader. At first it would seem that one would just execute the previous lines for position, normal, binormal, and tangent, but it is easy to run out of vertex input registers. When we wrote the “Dawn” and “Dusk” demos, the GPU could map a maximum of 16 per-vertex input attributes. The mesh must begin with the neutral position, normal, binormal, texture coordinate, bone weights, and bone indices (described later), leaving 10 inputs open for morph targets. We might have mapped the tangent as well, but we opted to take the cross product of the normal and binormal in order to save one extra input.

【We want the morph targets to affect both the vertex position and the basis, which in turn feed the lighting in the fragment shader. Note that vertex input registers are limited: whatever remains after the standard inputs is what the morph targets can use. Storing only the normal and binormal and deriving the tangent by cross product saves one register.】

 
 

Because each difference vector takes one input, we might have 10 blend shapes that influence position, five blend shapes that influence position and normal, three position-normal-binormal blend shapes, or two position-normal-binormal-tangent blend shapes. We ultimately chose to have our vertex shader apply five blend shapes that modified the position and normal. The vertex shader would then orthonormalize the neutral tangent against the new normal (that is, subtract the collinear elements of the new normal from the neutral tangent and then normalize) and take the cross product for the binormal. Orthonormalization is a reasonable approximation for meshes that do not twist around the surface normal:

【Each difference vector takes one input and contributes to the final blended position. We ultimately chose five blend shapes affecting position and normal. The new tangent is computed like this:】

 
 

// assumes normal is the post-morph-target result
// normalize only needed if not performed in fragment shader
float3 tangent = vertexIn.neutralTangent - dot(vertexIn.neutralTangent, normal) * normal;
tangent = normalize(tangent);
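The orthonormalization step is easy to verify on the CPU. The following is an illustrative Python sketch (the helper names are mine, not the demo's code): rebuild the tangent against a post-morph normal and derive the binormal by cross product.

```python
# Plain-Python 3-vector helpers; no external dependencies.
def dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def normalize(v):
    length = dot(v, v) ** 0.5
    return (v[0] / length, v[1] / length, v[2] / length)

def rebuild_basis(neutral_tangent, morphed_normal):
    """Subtract the collinear part of the new normal from the neutral
    tangent, normalize, then take the cross product for the binormal."""
    n = normalize(morphed_normal)
    t = tuple(neutral_tangent[i] - dot(neutral_tangent, n) * n[i] for i in range(3))
    t = normalize(t)
    b = cross(n, t)
    return t, b

# After morphing tilts the normal, the tangent is re-orthogonalized against it:
t, b = rebuild_basis((1.0, 0.0, 0.0), (0.2, 1.0, 0.0))
```

As the chapter notes, this approximation holds as long as the morph does not twist the surface around its normal.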

 
 

Thus, we had a data set with 50 morph targets, but only five could be active (that is, with weight greater than 0) at any given time. We did not wish to burden the CPU with copying data into the mesh every time a different blend shape became active, so we allocated a mesh with vertex channels for neutralPosition, neutralNormal, neutralBinormal, textureCoord, and 50 * (positionDiff, NormalDiff). On a per-frame basis, we merely changed the names of the vertex input attributes so that those that should be active became the valid inputs and those that were inactive were ignored. For each frame, we would find those five position and normal pairs and map those into the vertex shader, allowing all other vertex data to go unused.

【So we have 50 morph targets, but only five can be active at once. Rather than have the CPU copy data into the mesh whenever a different shape becomes active, the mesh is allocated with vertex channels for neutralPosition, neutralNormal, and so on, plus all 50 difference pairs. Each frame we merely rebind the vertex input attribute names so that the five active pairs become the shader's inputs and the rest are ignored.】
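The per-frame rebinding amounts to picking the five highest-weighted targets. A hypothetical sketch of that selection (the names and data layout are illustrative, not the demo's actual code):

```python
# Assumption: 5 register pairs available for (positionDiff, normalDiff) channels.
MAX_ACTIVE = 5

def select_active_targets(weights):
    """weights: dict of target name -> scalar weight for this frame.
    Returns up to MAX_ACTIVE (name, weight) pairs, largest weight first;
    only these channels would be bound as vertex shader inputs."""
    active = [(name, w) for name, w in weights.items() if w > 0.0]
    active.sort(key=lambda item: item[1], reverse=True)
    return active[:MAX_ACTIVE]

# Made-up frame weights; note they need not sum to 1 (0.8*happy + 1.0*ear_wiggle).
frame_weights = {"happy": 0.8, "ear_wiggle": 1.0, "smirk": 0.0,
                 "brow_up_l": 0.25, "sad": 0.0, "blink": 0.1, "jaw_open": 0.3}
active = select_active_targets(frame_weights)
```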

 
 

Note that the .w components of the positionDiff and normalDiff were not really storing any useful interpolants. We took advantage of this fact and stored a scalar self-occlusion term in the .w of the neutralNormal and the occlusion difference in each of the normal targets. When extracting the resulting normal, we just used the .xyz modifier to the register, which allowed us to compute a dynamic occlusion term that changed based on whether Dawn’s eyes and mouth were open or closed, without any additional instructions. This provided for a soft shadow used in the lighting of her skin (as described in detail in Chapter 3, “Skin in the ‘Dawn’ Demo”).

【The .w components of positionDiff/normalDiff are unused by the interpolation, so a scalar self-occlusion term is packed into them (and into neutralNormal.w); reading the normal with .xyz leaves the occlusion term available, giving a dynamic occlusion value for soft skin shadows at no extra instruction cost.】

 
 

On the content-creation side, our animator had no difficulty remaining within the limit of five active blend shapes, because he primarily animated between three or so emotional faces and then added the elemental modifiers for complexity. We separated the head mesh from the rest of the body mesh because we did not want the added work of doing the math or storing the zero difference that, say, the happy face would apply to Dawn’s elbow. The result remained seamless—despite the fact that the head was doing morph targets and skinning while the body was doing just skinning—because the outermost vertices of the face mesh were untouched by any of the emotional blend shapes. They were still modified by the skinning described next, but the weights were identical to the matching vertices in the body mesh. This ensured that no visible artifact resulted.

【On the content side, five active shapes were plenty to blend the desired expressions. The head is split from the body so the body does not compute or store zero differences; the seam stays invisible because the outermost face vertices are untouched by the emotional shapes and share their skinning weights with the matching body vertices.】

 
 

 
 

4.4 Skinning

 
 

Skinning is a method of mesh deformation in which each vertex of that mesh is assigned an array of matrices that act upon it along with weights (that should add up to 1.0) that describe how bound to that matrix the vertex should be. For example, vertices on the bicep may be acted upon only by the shoulder joint, but a vertex on the elbow may be 50 percent shoulder joint and 50 percent elbow joint, becoming 100 percent elbow joint for vertices beyond the curve of the elbow.

【Skinning means driving the mesh with a skeleton: each mesh vertex derives its new position from a weighted blend of its assigned bone matrices.】

 
 

Preparing a mesh for skinning usually involves creating a neutral state for the mesh, called a bind pose. This pose keeps the arms and legs somewhat separated and avoids creases as much as possible, as shown in Figure 4-3. First, we create a transform hierarchy that matches this mesh, and then we assign matrix influences based on distance—usually with the help of animation tools, which can do this reasonably well. Almost always, the result must be massaged to handle problems around shoulders, elbows, hips, and the like. This skeleton can then be animated through a variety of techniques. We used a combination of key-frame animation, inverse kinematics, and motion capture, as supported in our content-creation tool.

【A bind pose is a neutral stance for the mesh (limbs apart, creases minimized); a matching transform hierarchy is built, matrix influences are assigned by distance, and the result is cleaned up by hand around shoulders, elbows, and hips. The skeleton is then animated with keyframes, IK, and motion capture, and skinning deforms the mesh as described above.】

 
 


Figure 4-3 Dawn’s Bind Pose

 
 

A skinned vertex is the weighted summation of that vertex being put through its active joints, or:

【Formula: the final vertex position is the weighted sum of the vertex pushed through each of its active joints; the matrix products arise from the parent-child inheritance between bones.】

 
 

v_skinned = Σ_i w_i · (v · B_i⁻¹ · M_i),  with Σ_i w_i = 1

where B_i⁻¹ is the inverse bind-pose matrix of joint i and M_i is that joint's current world matrix.
 
 

Conceptually, this equation takes the vertex from its neutral position into a weighted model space and back into world space for each matrix and then blends the results. The concatenated inverse-bind-pose and joint world matrices are stored as constant parameters, and the matrix indices and weights are passed as vertex properties. The application of four-bone skinning looks like this:

【The computation takes the vertex through model space and back into world space for each matrix, then blends. Implemented as follows:】

 
 

float4 skin(float4x4 bones[98],
            float4 position,
            float4 boneWeights0,
            float4 boneIndices0)
{
    float4 result = boneWeights0.x * mul(bones[boneIndices0.x], position);
    result = result + boneWeights0.y * mul(bones[boneIndices0.y], position);
    result = result + boneWeights0.z * mul(bones[boneIndices0.z], position);
    result = result + boneWeights0.w * mul(bones[boneIndices0.w], position);
    return result;
}

 
 

In the “Dawn” demo, we drive a mesh of more than 180,000 triangles with a skeleton of 98 bones. We found that four matrices per vertex was more than enough to drive the body and head, so each vertex had to have four bone indices and four bone weights stored as vertex input attributes (the last two of the 16 xyzw vertex registers mentioned in Section 4.3.2). We sorted bone weights and bone indices so that we could rewrite the vertex shader to artificially truncate the number of bones acting on the vertex if we required higher vertex performance. Note that if you do this, you must also rescale the active bone weights so that they continue to add up to 1.

【In the Dawn demo the mesh exceeds 180,000 triangles and the skeleton has 98 bones. Four bones per vertex proved more than enough; note that if you truncate the bone count for performance, the remaining weights must be rescaled so they still sum to 1.】
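The truncation-plus-rescaling rule can be sketched as follows (a hypothetical helper, assuming the per-vertex weights are pre-sorted in descending order as the text describes):

```python
def truncate_influences(weights, indices, max_bones):
    """weights: per-vertex bone weights, sorted descending, summing to 1.
    indices: matching bone indices. Keep only the strongest max_bones
    influences and rescale the kept weights so they sum to 1 again."""
    kept_w = weights[:max_bones]
    kept_i = indices[:max_bones]
    total = sum(kept_w)
    kept_w = [w / total for w in kept_w]
    return kept_w, kept_i

# Drop from four bones to two for a cheaper vertex shader path:
w, i = truncate_influences([0.5, 0.3, 0.15, 0.05], [12, 7, 3, 44], 2)
```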

 
 

4.4.1 Accumulated Matrix Skinning

 
 

When skinning, one must apply the matrix and its bind pose inverse not only to the position, but also to the normal, binormal, and tangent for lighting to be correct. If your hierarchy cannot assume that scales are the same across x, y, and z, then you must apply the inverse transpose of this concatenated matrix. If scales are uniform, then the inverse is the transpose, so the matrix remains unchanged. Nonuniform scales create problems in a variety of areas, so our engine does not permit them.

【When skinning, the matrices must be applied not just to the position but also to the normal, binormal, and tangent; note that uniform scale is required (our engine forbids nonuniform scales).】

 
 

If we call the skin function from the previous code, we must call mul for each matrix for each vertex property. In current hardware, multiplying a point by a matrix is implemented as four dot products and three adds, and vector-multiply is three dot products and two adds. Thus, four-bone skinning of position, normal, binormal, and tangent results in:

【Instruction count for four-bone skinning of position, normal, binormal, and tangent: the first parenthesis is the cost of one point-matrix multiply; over four matrices and the three basis vectors the total is 88 instructions.】

 
 

4 matrices × (4 dot products + 3 adds) for the position
+ 4 matrices × (3 dot products + 2 adds) × 3 vectors (normal, binormal, tangent)
= 28 + 60 = 88 instructions
 
 

An unintuitive technique that creates the sum of the weighted matrices can be trivially implemented in HLSL or Cg as follows:

【Building the accumulated matrix on the GPU:】

 
 

float4x4 accumulate_skin(float4x4 bones[98],
                         float4 boneWeights0,
                         float4 boneIndices0)
{
    float4x4 result = boneWeights0.x * bones[boneIndices0.x];
    result = result + boneWeights0.y * bones[boneIndices0.y];
    result = result + boneWeights0.z * bones[boneIndices0.z];
    result = result + boneWeights0.w * bones[boneIndices0.w];
    return result;
}

 
 

Although this technique does burn instructions to build the accumulated matrix (16 multiplies and 12 adds), it now takes only a single matrix multiply to skin a point or vector. Skinning the same properties as before costs:

【This reduces the total instruction count.】

 
 

16 multiplies + 12 adds (building the accumulated matrix)
+ (4 dot products + 3 adds) for the position
+ (3 dot products + 2 adds) × 3 vectors
= 28 + 7 + 15 = 50 instructions
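The accumulated-matrix trick is valid because matrix-vector multiplication is linear: blending the matrices first and transforming once gives the same result as transforming by each bone and blending the points. A quick numeric check (random matrices and made-up weights, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
bones = [rng.standard_normal((4, 4)) for _ in range(4)]   # four stand-in bone matrices
weights = np.array([0.4, 0.3, 0.2, 0.1])                  # sums to 1
v = np.array([1.0, 2.0, 3.0, 1.0])                        # homogeneous position

# Path 1: transform by each bone, then blend the points (the skin() approach).
blended_points = sum(w * (m @ v) for w, m in zip(weights, bones))

# Path 2: blend the matrices once, then a single transform per property
# (the accumulate_skin() approach).
accumulated = sum(w * m for w, m in zip(weights, bones))
accumulated_point = accumulated @ v
```

With one accumulated matrix, each additional property (normal, binormal, tangent) costs only one more matrix multiply, which is where the 88-versus-50 instruction saving comes from.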
 
 

 
 

4.5 Conclusion

 
 

It is almost always beneficial to offload mesh animation from the CPU and take advantage of the programmable vertex pipeline offered by modern graphics hardware. Having seen the implementation of skinning and morph targets using shaders, however, it is clear that the inner loops are quite easy to implement using Streaming SIMD Extensions (SSE) instructions and the like, and that in those few cases where it is desirable to remain on the CPU, these same techniques work well.

 
 

In the case of the “Dawn” demo, morph targets were used to drive only the expressions on the head. If we had had more time, we would have used morph targets all over the body to solve problems with simple skinning. Even a well-skinned mesh has the problem that elbows, knees, and other joints lose volume when rotated. This is because the mesh bends but the joint does not get “fatter” to compensate for the pressing of flesh against flesh. A morph target or other mesh deformation applied either before or after the skinning step could provide this soft, fleshy deformation and create a more realistic result. We have done some work on reproducing the variety of mesh deformers provided in digital content-creation tools, and we look forward to applying them in the future.


 
 

【The main takeaway here is less any single technique than that NVIDIA's GPUs became powerful enough to run a character with this much skinning; such a complex avatar has limited practical value, and GPU-skinning optimizations of this kind theoretically save under 50%, likely less in practice.】

 
 

4.6 References

 
 

Alias Systems. Maya 5.0 Devkit. <installation_directory>/devkit/animEngine/

 
 

Alias Systems. Maya 5.0 Documentation.

 
 

Eberly, David H. 2001. 3D Game Engine Design, pp. 356–358. Academic Press.

 
 

Gritz, Larry, Tony Apodaca, Matt Pharr, Dan Goldman, Hayden Landis, Guido Quaroni, and Rob Bredow. 2002. “RenderMan in Production.” Course 16, SIGGRAPH 2002.

 
 

Hagland, Torgeir. 2000. “A Fast and Simple Skinning Technique.” In Game Programming Gems, edited by Mark DeLoura. Charles River Media.

 
 

Voxel House

  • Introduction

 
 

http://www.oskarstalberg.com/game/house/Index.html

 
 


 
 

My projects typically revolve around some central idea that I want to explore. Here, that central idea is a particular content-driven approach to modular tilesets that I’ve had on my mind for a while. This project could have been created as a Python script in Maya or a node graph in Houdini. However, since I don’t want my final presentation material to be a dull narrated youtube clip set in a grey-boxed Maya scene, I created an interactive web demo instead. As a tech artist, the width of my skill set is crucial; I’m not a master artist nor a proper coder, but I’ve got a slice of both in me. I’m most comfortable in the very intersection of art and tech; of procedure and craftsmanship. A web demo is the perfect medium to display those skills.

 
 

 
 

  • Figuring out the tiles

 
 

The core concept is this: the tiles are placed in the corners between blocks, not in the centers of the blocks. Each tile is defined by the eight blocks that surround it: a tile adjacent to a single block in the corner would be 1,0,0,0,0,0,0,0; a tile representing a straight wall would be 1,1,1,1,0,0,0,0.

 
 


 
 

Since each corner is surrounded by 8 possible blocks, each of which can be in one of 2 possible states of existence or non-existence, the number of possible tiles is 2^8 = 256. That is way more tiles than I want to model, so I wrote a script to figure out which of these tiles were truly unique, and which were just rotations of other tiles. The script told me that I had to model 67 unique tiles – a much more manageable number.
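A sketch of such a uniqueness script, assuming rotation only about the vertical axis (the article doesn't show the script's exact reductions; counting pure yaw-rotation classes this way yields 70, so the stated 67 implies a few additional merges):

```python
def corner_index(x, y, z):
    """Index each of the 8 surrounding blocks by its (x, y, z) in {0,1}^3."""
    return x + 2 * y + 4 * z

def rotate90(mask):
    """Rotate an 8-bit block configuration 90 degrees about the vertical
    axis: (x, y) -> (y, 1 - x) on both the lower and upper ring."""
    out = 0
    for x in (0, 1):
        for y in (0, 1):
            for z in (0, 1):
                if mask & (1 << corner_index(x, y, z)):
                    out |= 1 << corner_index(y, 1 - x, z)
    return out

def canonical(mask):
    """Smallest mask over the four yaw orientations: rotated variants
    of the same tile collapse to one representative."""
    best = mask
    for _ in range(3):
        mask = rotate90(mask)
        best = min(best, mask)
    return best

unique_tiles = {canonical(m) for m in range(256)}
```

Flipped variants are deliberately not merged here, matching the article's choice to keep mirror tiles for asymmetric features like drain pipes.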

 
 


 
 

I could have excluded flipped versions of other tiles as well, which would have brought the number down even further. However, I decided to keep those so that I could make some asymmetrically tiling features. The drain pipes you see in the concave corners of the building are one example of that.

 
 

 
 

  • Boolean setup in Maya

 
 

Being the tech artist that I am, I often spend more time on my workflow than on my actual work. Even accounting for rotational permutations, this project still involved a large amount of 3D meshes to manually create and keep track of. The modular nature of the project also made it important to continuously see and evaluate the models in their proper context outside of Maya. The export process had to be quick and easy and I decided to write a small python script to help me out.

【There is a huge amount of work here: even with rotations accounted for, many combinations remain, and every connection needs to look right in context. The export process has to be quick and easy, so a Python script handles it; in effect the script lets the artist's output be validated continuously.】

 
 

First, the script merges all my meshes into one piece. Second, a bounding box for each tile proceeds to cut out its particular slice of this merged mesh using Maya’s boolean operation. All the cutout pieces inherit the name and transform from their bounding box and are exported together as an fbx.

【All the related meshes are merged into one piece; each tile's bounding box then uses Maya's boolean operation to cut out its slice of the merged mesh, and each piece inherits the name and transform of its box.】

 
 

Not only did this make the export process a one-button solution, it also meant that I didn’t have to keep my Maya scene that tidy. It didn’t matter what meshes were named, how they were parented or whether they were properly merged or not. I adapted my Maya script to allow several variations of the same tile type. My Unity script then chose randomly from that pool of variations where it existed. In the image below, you can see that some of the bounding boxes are bigger than the others. Those are for tiles that have vertices that stretch outside their allotted volume.

 
 


 
 

 
 

  • Ambient Occlusion

 
 

Lighting is crucial to convey 3D shapes and a good sense of space. Due to the technical limitations in the free version of Unity, I didn’t have access to either real-time shadows or SSAO – nor could I write my own, since free Unity does not allow render targets. The solution was found in the blocky nature of this project. Each block was made to represent a voxel in a 3D texture. While Unity does not allow me to draw render targets on the GPU, it does allow me to manipulate textures from script on the CPU. (This is of course much slower per pixel, but more than fast enough for my purposes.)

Simply sampling that pixel in the general direction of the normal gives me a decent ambient occlusion approximation.

 
 

I tried to multiply this AO on top of my unlit color texture, but the result was too dark and boring. I decided on an approach that took advantage of my newly acquired experience in 3D textures: instead of just making pixels darker, the AO lerps the pixel towards a 3D LUT that makes it bluer and less saturated. The result gives me a great variation in hue without too harsh a variation in value. This lighting model gave me the soft and tranquil feeling I was aiming for in this project.
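The scheme can be sketched like so (illustrative values and names, not the demo's actual shader): sample the occupancy grid one step along the normal for an occlusion estimate, then lerp the albedo toward a cooler tint rather than simply darkening it.

```python
def sample_occupancy(grid, p):
    """grid: nested lists forming a voxel occupancy volume (1.0 = solid)."""
    x, y, z = (int(round(c)) for c in p)
    if 0 <= x < len(grid) and 0 <= y < len(grid[0]) and 0 <= z < len(grid[0][0]):
        return grid[x][y][z]
    return 0.0  # outside the structure: fully open, no occlusion

def lerp(a, b, t):
    return tuple(ai + (bi - ai) * t for ai, bi in zip(a, b))

def shade(grid, pos, normal, albedo, shadow_tint=(0.35, 0.4, 0.55)):
    """AO approximation: probe the voxel one step along the normal,
    then shift toward a bluish, desaturated tone instead of multiplying
    the albedo by (1 - occlusion)."""
    probe = tuple(pos[i] + normal[i] for i in range(3))
    occlusion = sample_occupancy(grid, probe)
    return lerp(albedo, shadow_tint, occlusion)

grid = [[[0.0, 1.0]]]  # a 1x1x2 column: air below, solid block above
occluded = shade(grid, (0, 0, 0), (0, 0, 1), (0.9, 0.6, 0.4))     # faces the solid block
open_color = shade(grid, (0, 0, 0), (0, 0, -1), (0.9, 0.6, 0.4))  # faces open air
```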

 
 


 
 

 
 

  • Special Pieces

 
 


 
 

When you launch the demo, it will auto generate a random structure for you. By design, that structure does not contain any loose or suspended blocks.

 
 

I know that a seasoned tool-user will try to break the tool straight away by seeing how it might treat these types of abnormal structures. I decided to show off by making these tiles extra special, displaying features such as arcs, passages, and pillars.

 
 


 
 


 
 

 
 

  • Floating Pieces

 
 

There is nothing in my project preventing a user from creating free-floating chunks, and that’s the way I wanted to keep it. But I also wanted to show the user that I had, indeed, thought about that possibility. My solution to this was to let the free-floating chunks slowly bob up and down. This required me to create a fun little algorithm to figure out in real time which blocks were connected to the base and which weren’t:

 
 

The base blocks each get a logical distance of 0. The other blocks check whether any of their neighbors have a shorter logical distance than their own; if so, they adopt that value and add 1 to it. Thus, if you disconnect a chunk, nothing grounds those blocks to the 0 of the base blocks, and their logical distance quickly goes through the roof. That is when they start to bob.
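The propagation rule above can be sketched as a fixed-point iteration (a simplification of the incremental per-frame version the demo runs; the names are mine):

```python
def logical_distances(blocks, base, steps=64):
    """blocks: set of (x, y, z) voxel coordinates; base: the subset
    resting on the ground. Base blocks stay at 0; every other block
    repeatedly adopts (shortest neighbor distance + 1). Blocks that
    never connect back to the base keep an unbounded distance."""
    INF = float("inf")
    dist = {b: (0 if b in base else INF) for b in blocks}
    for _ in range(steps):
        for b in blocks:
            if b in base:
                continue
            x, y, z = b
            neighbors = [(x + 1, y, z), (x - 1, y, z), (x, y + 1, z),
                         (x, y - 1, z), (x, y, z + 1), (x, y, z - 1)]
            best = min((dist[n] for n in neighbors if n in dist), default=INF)
            dist[b] = min(dist[b], best + 1)
    return dist

# A grounded two-block column plus one disconnected block floating above it:
blocks = {(0, 0, 0), (0, 0, 1), (0, 0, 5)}
dist = logical_distances(blocks, base={(0, 0, 0)})
```

Any block whose distance exceeds a threshold can be treated as floating and given the bobbing animation.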

 
 

The slow bobbing of the floating chunks adds some nice ambient animation to the scene.

 
 


 
 

 
 

  • Art Choices

 
 

Picking a style is a fun and important part of any project. The style should highlight the features relevant to a particular project. In this project, I wanted a style that would emphasize blockiness and modularity rather than hiding it.

 
 

The clear green lines outline the terraces, the walls are plain and have lines of darker brick marking each floor, the windows are evenly spaced, and the dirt at the bottom is smooth and sedimented in straight lines. Corners are heavily beveled to emphasize that the tiles fit together seamlessly. The terraces are supposed to look like cozy secret spaces where you could enjoy a slow brunch on a quiet Sunday morning. Overall, the piece is peaceful and friendly – a homage to the tranquility of bourgeois life, if you will.

 
 

 
 

  • Animation

 
 

It should be fun and responsive to interact with the piece. I created an animated effect for adding and removing blocks. The effect is a simple combination of a vertex shader that pushes the vertices out along their normals and a pixel shader that breaks up the surface over time. A nice twist is that I was able to use the 3D texture created for the AO to constrain the vertices along the edge of the effect – this is what creates the bulge along the middle seen in the picture.

 
 


 
 


 
 

 
 

 
 

  • Conclusion

 
 

The final result is like a tool, but not quite. It’s an interactive piece of art that runs in your browser. It can be evaluated for its technical aspects, its potential as a level editor tool, its shader work, its execution and finish, or just as a fun thing to play around with. My hope is that it can appeal to developers and laymen alike. In a way, a web demo like this is simply a mischievous way to trick people into looking at your art longer than they otherwise would.

Managing Transformations in Hierarchy

  • Introduction

 

One of the most fundamental aspects of 3D engine design is management of spatial relationship between objects. The most intuitive way of handling this issue is to organize objects in a tree structure (hierarchy), where each node stores its local transformation, relative to its parent.

The most common way to define the local transformation is to use a so-called TRS system, where the transformation is composed of translation, rotation, and scale. This system is very easy to use for both programmers using the engine as well as non-technical users like level designers. In this chapter we describe the theory behind such a system.

One problem with the system is decomposition of a matrix back to TRS. It turns out that this problem is often ill-defined and no robust solution exists. We present an approximate solution that works reasonably well in the majority of cases.

 

  • Theory

Tree structure

Keeping objects in a hierarchy is a well-known concept. Every object can have a number of children and only one parent. It can also be convenient to store and manage a list of pointers to the children so that we have fast access to them. The aforementioned structure is in fact a tree.

Node structure

We assume that a node stores its translation, rotation, and scale (TRS) that are relative to its parent. Therefore, we say these properties are local. When we move an object, we drag all its children with it. If we increase scale of the object, then all of its children will become larger too.

Example:

bgt8_1_01

 

Transformation matrices and TRS

Relationship between a single node's transformation matrix and its TRS

Local TRS uniquely defines a local transformation matrix M. We transform vector v in the following way:

bgt8_1_02

where S is an arbitrary scale matrix, R is an arbitrary rotation matrix, T is a translation matrix, and T (bold) is the translation vector that the matrix T is built from.

Composing transformation matrices up the hierarchy

To render an object, we need to obtain its global (world) transformation by composing local transformations of all the object’s ancestors up in the hierarchy.

The composition is achieved by simply multiplying local matrices. Given a vector v0, its local matrix M0, and the local matrix M1 of v0’s parent, we can find the global position v2:

bgt8_1_03

Using vector notation for translation, we get

bgt8_1_04

Note that, in general, RS ≠ S′R′: rotation and scale matrices do not commute.
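A small numeric check of this composition, using the chapter's row-vector convention v′ = vM with M = SRT (all values are made up for illustration):

```python
import numpy as np

def trs(translation, angle_z, scale):
    """Local matrix M = S * R * T, row-vector convention:
    translation lives in the bottom row, rotation here is about z."""
    c, s = np.cos(angle_z), np.sin(angle_z)
    S = np.diag([scale[0], scale[1], scale[2], 1.0])
    R = np.array([[ c, s, 0, 0],
                  [-s, c, 0, 0],
                  [ 0, 0, 1, 0],
                  [ 0, 0, 0, 1]])
    T = np.eye(4)
    T[3, :3] = translation
    return S @ R @ T

child = trs((1.0, 0.0, 0.0), 0.0, (2.0, 2.0, 2.0))          # M0: scale 2, move +x
parent = trs((0.0, 3.0, 0.0), np.pi / 2, (1.0, 1.0, 1.0))   # M1: rotate 90deg, move +y

v = np.array([1.0, 0.0, 0.0, 1.0])
global_matrix = child @ parent          # local matrices multiply up the chain: M0 * M1
v_world = v @ global_matrix
step_by_step = (v @ child) @ parent     # same result, transforming stage by stage
```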

 

Skew Problem

Problem:

Applying a nonuniform scale (coming from object A) that follows a local rotation (objects B and C) will cause objects (B and C) to be skewed. Skew can appear during matrices composition but it becomes a problem during the decomposition, as it cannot be expressed within a single TRS node. We give an approximate solution to this issue in Section 3.2.4.

bgt8_1_05

Approximate solution:

Let an object have n ancestors in the hierarchy tree. Let M1,M2, · · · ,Mn be their local transformation matrices, M0 be a local transformation matrix of the considered object, and Mi = SiRiTi.

M_TRSΣ = M_0 M_1 · · · M_n

M_TRΣ = R_0 T_0 R_1 T_1 · · · R_n T_n

The rotation-translation parts compose cleanly, giving the world-space TR directly.

M_SΣ = M_TRSΣ · (M_TRΣ)⁻¹

Here the skew and the scale are combined. We use the diagonal elements of M_SΣ to get the scale, and we choose to ignore the rest, which is responsible for the skew.

So the scale is taken from this diagonal, the discarded off-diagonal terms carry the skew, and the TR obtained above is used as-is; the result is free of skew.

 

Handling a change of parent

In a 3D engine we often need to modify objects’ parent-children relationship.

When doing so, we want to change the local transformation such that the global transformation stays the same. Obviously, that forces us to recompute the local TRS values of the object whose parent we're changing.

To get from the current local space to a new local space (parent changes, global transform stays the same), we first need to find the global transform of the object by going up in the hierarchy to the root node. Having done this we need to go down the hierarchy to which our new parent belongs.

Let M′0 be the new parent’s local transformation matrix. Let that new parent have n′ ancestors in the hierarchy tree with local transformations M′1, M′2, · · · , M′n′, where M′i = S′iR′iT′i. The new local transformation matrix can thus be found using the following formula:

bgt8_1_06

bgt8_1_07

This formula yields the new local TRS.
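The reparenting rule (new local matrix = object's global matrix times the inverse of the new parent's global matrix) can be sketched with translation-only matrices; the values are illustrative:

```python
import numpy as np

def translation_matrix(t):
    M = np.eye(4)
    M[3, :3] = t  # row-vector convention: translation in the bottom row
    return M

old_parent_global = translation_matrix((5.0, 0.0, 0.0))
local = translation_matrix((1.0, 0.0, 0.0))
obj_global = local @ old_parent_global            # object sits at world x = 6

new_parent_global = translation_matrix((0.0, 2.0, 0.0))
new_local = obj_global @ np.linalg.inv(new_parent_global)

# Recomposing under the new parent reproduces the same global transform:
recomposed = new_local @ new_parent_global
```

In a full TRS system the recomputed matrix would then be decomposed back into translation, rotation, and scale as described above.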

 

Alternative Systems

This section mainly concerns scale handling as it relates to skew.

Approach: only leaf nodes may store a nonuniform, per-axis scale (a 3-vector); every other node stores a single uniform scale value. This effectively avoids the skew problem and is simple to implement.

 

  • Implementation

Node structure:

bgt8_1_08

Reducing Texture Memory Usage by 2-channel Color Encoding

Principle:

Put simply, the gamut that a typical texture actually uses is small, and this property can be exploited to reduce the amount of data needed to represent the texture.

These single-material textures often do not exhibit large color variety and contain a limited range of hues, while using a full range of brightness resulting from highlights and dark (e.g., shadowed), regions within the material surface.

 

The basic approach is to store luminance plus a hue/saturation term.

The method presented here follows these observations and aims to encode any given texture into two channels: one channel preserving full luminance information and the other one dedicated to hue/saturation encoding.

 

Texture Encoding Algorithm

 

Encoding is a mapping from 3D to 2D: find a plane such that the total distance from all the texture's 3D color points to that plane is minimal, keeping the error small.

Approximating this space with two channels effectively means that we have to find a surface (two-dimensional manifold) embedded within this unit cube that lies as close as possible to the set of texels from the source texture.

bgt_7_01

 

Steps:

1. Re-estimate the color space

 

Convert sRGB values to a linear color space. The R, G, and B channels contribute nonlinearly and unequally to luminance, so they are given different weights.

These two steps yield a 3D space in which linear operations are valid.

bgt_7_02

The plane is then fitted in this space.

Distance from a point to the plane:

bgt_7_03

Sum of squared distances of all points to the plane:

bgt_7_04

This is computed as in the estimate_image function and in the book.

2. Compute the two base color vectors

 

bgt_7_05

As the figure shows, this is simple: the base colors are initialized as bc1 = (0, 1, m) and bc2 = (1, 0, n) and then solved for using the plane information. See the find_components() function.

3. Luminance encoding

Formula:

bgt_7_06

4. Hue encoding

 

bgt_7_07

Four steps: first project the 3D point onto the plane, which gives a vector from (0,0,0) to the projected point; compute the two base color vectors; express the projected vector in terms of those base color vectors; finally, derive the hue value from the formula.

 

Decoding Algorithm

Decoding is simple: given the two base colors plus the hue blend parameter and the luminance, the color is recovered by inverting the encoding.

bgt_7_08
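A much-simplified roundtrip sketch of the two-channel idea (linear space, no gamma, made-up base colors; the real encoder works in the re-weighted space described above): store luminance plus a position between two base colors, then decode by blending the bases and rescaling to the stored luminance.

```python
LUMA = (0.2126, 0.7152, 0.0722)  # per-channel luminance weights

def luminance(c):
    return sum(w * ch for w, ch in zip(LUMA, c))

def encode(c, base_a, base_b):
    """Return (luminance, hue). hue = where the color falls between the
    two base colors, via projection onto the base_a..base_b axis."""
    axis = [b - a for a, b in zip(base_a, base_b)]
    rel = [ch - a for ch, a in zip(c, base_a)]
    t = sum(r * x for r, x in zip(rel, axis)) / sum(x * x for x in axis)
    hue = min(1.0, max(0.0, t))
    return luminance(c), hue

def decode(lum, hue, base_a, base_b):
    """Blend the base colors by hue, then rescale to the stored luminance."""
    c = [a + (b - a) * hue for a, b in zip(base_a, base_b)]
    clum = luminance(c)
    return tuple(ch * lum / clum for ch in c)

base_a, base_b = (0.9, 0.2, 0.1), (0.1, 0.3, 0.9)  # made-up material extremes
lum, hue = encode((0.5, 0.25, 0.5), base_a, base_b)
rgb = decode(lum, hue, base_a, base_b)
```

Colors lying on the base-color axis roundtrip exactly; colors off the axis are pulled onto it, which is the lossy step the plane-fitting tries to minimize.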

 

  • Implementation:

 

vec3 estimate_image(BITMAP *src):

Split the error into six components: rr, gg, bb, rg, rb, gb, and first compute the image-wide mean of each.

Then brute-force over a preset range of normal values (e.g., n.xyz from 0 to 100), picking the normal that minimizes the error formula.

 

void stamp_color_probe(BITMAP *bmp):

This handles color preprocessing of the image.

 

Encoding:

BITMAP *encode_image(BITMAP *src, vec3 n):

Normalize the plane normal and find the two base colors.

Use those base colors to build a 2D position coordinate system and the corresponding color axes.

Then create the output bitmap, and for every output mipmap pixel:

fetch the RGB value

apply gamma 2.0

compute the position and color in the 2D coordinate system

compute the hue: float hue = -da/(db-da+0.00000001f); (da, db are the color values in the 2D system)

compute the luminance: float lum = sqrt(c.dot(to_scale))

encode the result as the two channels hue and lum.

 

Decoding:

BITMAP *decode_image(BITMAP *src, const vec3 &base_a, const vec3 &base_b):

Initialize the target bitmap and, for each of its pixels:

read the stored hue and lum

decode the color: vec3 c = base_a + (base_b-base_a)*hue;

decode the luminance: float clum = 0.2126f*c.x + 0.7152f*c.y + 0.0722f*c.z;

apply gamma 2.0 to return to sRGB values