Category: Basic Game Tech

Managing Transformations in Hierarchy

  • Introduction

 
 

One of the most fundamental aspects of 3D engine design is the management of spatial relationships between objects. The most intuitive way of handling this issue is to organize objects in a tree structure (hierarchy), where each node stores its local transformation relative to its parent.

 
 

The most common way to define the local transformation is to use a so-called TRS system, where the transformation is composed of translation, rotation, and scale. This system is very easy to use for both programmers using the engine and non-technical users like level designers. In this chapter we describe the theory behind such a system.

 
 

One problem with this system is the decomposition of a matrix back into TRS. It turns out that this problem is often ill-defined and that no robust solution exists. We present an approximate solution that works reasonably well in the majority of cases.

 
 

  • Theory

 
 

Tree Structure

Keeping objects in a hierarchy is a well-known concept. Every object can have a number of children and only one parent. It is also convenient to store and manage a list of pointers to the children, so that we have fast access to them. The structure described here is in fact a tree.

 
 

Node Structure

We assume that a node stores its translation, rotation, and scale (TRS), all relative to its parent; we therefore say these properties are local. When we move an object, we drag all of its children with it. If we increase the scale of the object, all of its children become larger too.

 
 


Transformation Matrices and TRS

 
 

The relationship between a single node's transformation matrix and its TRS

Local TRS uniquely defines a local transformation matrix M. Using row vectors, we transform a vector v in the following way:

v′ = vM = vSRT = vSR + t

where S is an arbitrary scale matrix, R is an arbitrary rotation matrix, T is a translation matrix, and t is the translation vector that T is made of.
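As a minimal sketch (row-vector convention; the Matrix4, Vector3, and Quaternion helpers are assumed, not the chapter's actual code):

    // Compose M = S * R * T from the local TRS values. With row vectors,
    // v' = v * M applies scale first, then rotation, then translation.
    Matrix4 localMatrix(const Vector3& t, const Quaternion& r, const Vector3& s)
    {
        return Matrix4::scaling(s) * Matrix4::rotation(r) * Matrix4::translation(t);
    }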

 
 

Composing transformation matrices through the hierarchy

To render an object, we need to obtain its global (world) transformation by composing local transformations of all the object’s ancestors up in the hierarchy.

The composition is achieved by simply multiplying the local matrices. Given a vector v0, its local matrix M0, and the local matrix M1 of v0's parent, we can find the global position v1:

v1 = v0 M0 M1 = v0 S0 R0 T0 S1 R1 T1

Using vector notation for translation, we get

v1 = ((v0 S0 R0 + t0) S1 R1) + t1
Note that, in general,

RS ≠ S′R′

that is, a rotation followed by a scale cannot simply be swapped into some other scale followed by some other rotation. This is exactly why a nonuniform scale combined with rotation introduces skew.

 
 

Skew Problem

 
 

Problem description:

Applying a nonuniform scale (coming from object A) after a local rotation (objects B and C) causes objects B and C to be skewed. Skew can appear during matrix composition, but it becomes a problem during decomposition, as it cannot be expressed within a single TRS node. We give an approximate solution to this issue in Section 3.2.4.


Solution:

Let an object have n ancestors in the hierarchy tree. Let M1, M2, ···, Mn be their local transformation matrices, let M0 be the local transformation matrix of the considered object, and let Mi = Si Ri Ti.

MTRSΣ = M0 M1 ··· Mn

MTRΣ = R0 T0 R1 T1 ··· Rn Tn

[Rotations and translations compose without introducing skew, so MTRΣ directly yields the world-space translation and rotation (TR).]

MSΣ = MTRSΣ (MTRΣ)^-1

Here we have the skew and the scale combined. We use the diagonal elements of MSΣ to get the scale, and we choose to ignore the rest, which is responsible for the skew.

[The scale is read off the diagonal of MSΣ; the remaining entries (the skew) are discarded, and the translation and rotation are taken from MTRΣ above. This way the skew is avoided.]
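A sketch of this approximate decomposition (same hypothetical Matrix4 helpers; Node as sketched in the Implementation section below):

    // Approximate world-space TRS of 'object' (row-vector convention:
    // world = M_object * M_parent * ... * M_root).
    void worldTRS(const Node* object, Vector3& outT, Quaternion& outR, Vector3& outS)
    {
        Matrix4 mTRS = Matrix4::identity();   // full product, may contain skew
        Matrix4 mTR  = Matrix4::identity();   // rotations and translations only
        for (const Node* n = object; n != nullptr; n = n->parent) {
            mTRS = mTRS * (Matrix4::scaling(n->scale)
                         * Matrix4::rotation(n->rotation)
                         * Matrix4::translation(n->translation));
            mTR  = mTR  * (Matrix4::rotation(n->rotation)
                         * Matrix4::translation(n->translation));
        }
        outT = mTR.translationPart();
        outR = mTR.rotationPart();                  // a pure rotation by construction
        Matrix4 mS = mTRS * mTR.inverse();          // scale combined with skew
        outS = Vector3{mS(0,0), mS(1,1), mS(2,2)};  // diagonal = scale, skew ignored
    }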

 
 

Handling Parent Changes

 
 

In a 3D engine we often need to modify the parent-children relationships of objects. When doing so, we usually want to change the local transformation such that the global transformation stays the same. Obviously, that forces us to recompute the local TRS values of the object whose parent is being changed.

 
 

To get from the current local space to a new local space (parent changes, global transform stays the same), we first need to find the global transform of the object by going up in the hierarchy to the root node. Having done this we need to go down the hierarchy to which our new parent belongs.

 
 

Let M′0 be the new parent's local transformation matrix, and let the new parent have n′ ancestors in the hierarchy tree with local transformations M′1, M′2, ···, M′n′, where M′i = S′i R′i T′i. The new local transformation matrix can thus be found using the following formula:

M = M0 M1 ··· Mn (M′0 M′1 ··· M′n′)^-1
 
 


[From this formula the new local matrix is obtained, and the new local TRS is extracted from it using the decomposition described above.]
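A sketch of reparenting that keeps the global transform (hypothetical helper names; decomposeTRS stands for the approximate decomposition above):

    // Reparent 'object' under 'newParent' while preserving its global transform.
    // worldMatrix() is assumed to compose local matrices up to the root.
    void reparent(Node& object, Node& newParent)
    {
        Matrix4 keepWorld = object.worldMatrix();   // M0 M1 ... Mn
        Matrix4 newLocal  = keepWorld * newParent.worldMatrix().inverse();
        object.setLocalTRS(decomposeTRS(newLocal)); // approximate (see above)
        object.removeFromParent();
        newParent.addChild(&object);
    }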

 
 

Alternative Systems

 
 

This part mainly concerns how scale is handled, and its relation to skew.

Approach: only leaf nodes store a full anisotropic scale with independent x, y, z components (a three-component vector, i.e. nonuniform scale in the last node); every other node stores a uniform scale (a single value, not a vector). This effectively avoids the skew problem and is simple to implement.

 
 

  • Implementation

 
 

Node structure:
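A minimal C++ sketch of a node matching the description above (local TRS relative to the parent, one parent pointer, a list of child pointers); the type names are illustrative, not the chapter's actual code:

    #include <vector>

    struct Vector3    { float x, y, z; };
    struct Quaternion { float x, y, z, w; };

    struct Node {
        // Local TRS, relative to the parent.
        Vector3    translation {0.0f, 0.0f, 0.0f};
        Quaternion rotation    {0.0f, 0.0f, 0.0f, 1.0f};
        Vector3    scale       {1.0f, 1.0f, 1.0f};

        Node*              parent = nullptr;
        std::vector<Node*> children;   // list of pointers for fast access

        void addChild(Node* child) {
            child->parent = this;
            children.push_back(child);
        }
    };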

Object Space Ambient Occlusion (OSAO): Dynamic Ambient Occlusion and Indirect Lighting

http://http.developer.nvidia.com/GPUGems2/gpugems2_chapter14.html

Michael Bunnell

NVIDIA Corporation


In this chapter we describe a new technique for computing diffuse light transfer and show how it can be used to compute global illumination for animated scenes. Our technique is efficient enough when implemented on a fast GPU to calculate ambient occlusion and indirect lighting data on the fly for each rendered frame. It does not have the limitations of precomputed radiance transfer (PRT) or precomputed ambient occlusion techniques, which are limited to rigid objects that do not move relative to one another (Sloan 2002). Figure 14-1 illustrates how ambient occlusion and indirect lighting enhance environment lighting.

[This introduces an efficient, GPU-based ambient occlusion technique. It breaks the usual limitation of precomputed approaches, which apply only to static objects.]

bgt_4_114_ambient_occlusion_01.jpg

Figure 14-1 Adding Realism with Ambient Occlusion and Indirect Lighting

Our technique works by treating polygon meshes as a set of surface elements that can emit, transmit, or reflect light and that can shadow each other. This method is so efficient because it works without calculating the visibility of one element to another. Instead, it uses a much simpler and faster technique based on approximate shadowing to account for occluding (blocking) geometry.

[Our technique treats polygon surfaces as a collection of surface elements that can emit, transmit, reflect, and shadow one another; this approximation gives a simple, fast account of the occluding geometry.]


14.1 Surface Elements

The first step in our algorithm is to convert the polygonal data to surface elements to make it easy to calculate how much one part of a surface shadows or illuminates another.

[The first step of the algorithm is to convert the polygon data into surface elements.]

Figure 14-2 illustrates the basic concept. We define a surface element as an oriented disk with a position, normal, and area. An element has a front face and a back face. Light is emitted and reflected from the front-facing side. Light is transmitted and shadows are cast from the back. We create one element per vertex of the mesh. Assuming that the vertices are defined with a position and normal already, we just need to calculate the area of each element. We calculate the area at a vertex as the sum of one-third of the area of the triangles that share the vertex (or one-fourth of the area for quads). Heron’s formula for the area of a triangle with sides of length a, b, and c is:

A = √(s(s − a)(s − b)(s − c))

where s is half the perimeter of the triangle: (a + b + c)/2.

[The figure below illustrates this step: a surface element is defined as an oriented disk carrying position, normal, and area. A surface element has a front and a back face; light is emitted and reflected from the front, while transmission and shadowing happen at the back.

One surface element is generated for each vertex of the mesh. The vertex position and normal are assigned to the element directly, and the element's area is computed as one third of the total area of the triangles using that vertex, with the triangle areas given by the formula above.]
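As a small illustration (hypothetical Mesh/Triangle types and distance helper; only the per-vertex accumulation is shown), the element areas could be computed like this:

    #include <cmath>
    #include <vector>

    float triangleArea(float a, float b, float c) {
        float s = 0.5f * (a + b + c);                       // half the perimeter
        return std::sqrt(s * (s - a) * (s - b) * (s - c));  // Heron's formula
    }

    // area[v] = sum of one third of the area of each triangle sharing vertex v.
    std::vector<float> computeElementAreas(const Mesh& mesh) {
        std::vector<float> area(mesh.positions.size(), 0.0f);
        for (const Triangle& t : mesh.triangles) {
            float a = distance(mesh.positions[t.i0], mesh.positions[t.i1]);
            float b = distance(mesh.positions[t.i1], mesh.positions[t.i2]);
            float c = distance(mesh.positions[t.i2], mesh.positions[t.i0]);
            float third = triangleArea(a, b, c) / 3.0f;
            area[t.i0] += third; area[t.i1] += third; area[t.i2] += third;
        }
        return area;
    }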

bgt_4_314_ambient_occlusion_02.jpg

Figure 14-2 Converting a Polygonal Mesh to Elements

We store element data (position, normal, and area) in texture maps because we will be using a fragment program (that is, a pixel shader) to do all the ambient occlusion calculations. Assuming that vertex positions and normals will change for each frame, we need to be able to change the values in the texture map quickly.

One option is to keep vertex data in a texture map from the start and to do all the animation and transformation from object space to eye (or world) space with fragment programs instead of vertex programs. We can use render-to-vertex-array to create the array of vertices to be sent down the regular pipeline, and then use a simple pass-through vertex shader.

Another, less efficient option is to do the animation and transformation on the CPU and load a texture with the vertex data each frame.

[We store the surface elements' position/normal/area in textures for use by the pixel shader. Since vertex positions and normals change every frame, we must be able to update the texture values quickly.

One viable approach is to keep the vertex data in a texture from the start and let fragment programs in eye space take over all the animation and transformation work from vertex programs in object space; render-to-vertex-array then produces the vertex array that is fed back into the regular pipeline, after which a simple pass-through vertex shader suffices.

The other, less efficient approach is to perform the animation and transformation on the CPU and upload the vertex data into a texture each frame.]


14.2 Ambient Occlusion

Ambient occlusion is a useful technique for adding shadowing to diffuse objects lit with environment lighting. Without shadows, diffuse objects lit from many directions look flat and unrealistic. Ambient occlusion provides soft shadows by darkening surfaces that are partially visible to the environment. It involves calculating the accessibility value, which is the percentage of the hemisphere above each surface point not occluded by geometry (Landis 2002). In addition to accessibility, it is also useful to calculate the direction of least occlusion, commonly known as the bent normal. The bent normal is used in place of the regular normal when shading the surface for more accurate environment lighting.

[An explanation of AO: generating soft shadows on object surfaces effectively increases realism.]

We can calculate the accessibility value at each element as 1 minus the amount by which all the other elements shadow the element. We refer to the element that is shadowed as the receiver and to the element that casts the shadow as the emitter. We use an approximation based on the solid angle of an oriented disk to calculate the amount by which an emitter element shadows a receiver element. Given that A is the area of the emitter, the amount of shadow can be approximated by:

shadow ≈ (1 − 1/√(A/(π r²) + 1)) · max(0, cos θE) · min(1, 4 max(0, cos θR))

Equation 14-1 Shadow Approximation

[Computing the accessibility value: 1 minus the shadowing cast onto the element by all other elements. An element is called the receiver when it is shadowed and the emitter when it casts the shadow. Since the angles between emitter and receiver are known, we estimate the amount of shadow with the formula above, together with the figure below; A is the emitter's area.]

As illustrated in Figure 14-3, θE is the angle between the emitter's normal and the vector from the emitter to the receiver. θR is the corresponding angle for the receiver element. The 4 cos θR term, clamped to [0, 1], is included in the disk solid angle formula to ignore emitters that do not lie in the hemisphere above the receiver without causing rendering artifacts for elements that lie near the horizon.

[This passage explains the meaning of the variables.]

bgt_4_514_ambient_occlusion_03.jpg

Figure 14-3 The Relationship Between Receiver and Emitter Elements

Here is the fragment program function to approximate the element-to-element occlusion:

[The implementation of this function follows.]

    float ElementShadow(float3 v, float rSquared,
                        float3 receiverNormal, float3 emitterNormal,
                        float emitterArea)
    {
      // Note that emitterArea has already been divided by PI.
      return (1 - rsqrt(emitterArea / rSquared + 1)) *
             saturate(dot(emitterNormal, v)) *
             saturate(4 * dot(receiverNormal, v));
    }


14.2.1 The Multipass Shadowing Algorithm

We calculate the accessibility values in two passes.

[The computation here uses two passes.]

In the first pass, we approximate the accessibility for each element by summing the fraction of the hemisphere subtended by every other element and subtracting the result from 1.

[The first pass approximates each element's accessibility according to the formula above.]

After the first pass, some elements will generally be too dark because other elements that are in shadow are themselves casting shadows. So we use a second pass to do the same calculation, but this time we multiply each form factor by the emitter element’s accessibility from the last pass.

[After the first pass some elements end up too dark, because shadows are over-accumulated: elements that are themselves in shadow still cast full shadows. The second pass therefore performs the same computation, but multiplies each form factor by the emitter element's accessibility from the previous pass.]

The effect is that elements that are in shadow will cast fewer shadows on other elements, as illustrated in Figure 14-4. After the second pass, we have removed any double shadowing.

[The effect is shown in the figure below; the second pass removes the over-darkening caused by double shadowing.]

However, surfaces that are triple shadowed or more will end up being too light. We can use more passes to get a better approximation, but we can approximate the same answer by using a weighted average of the combined results of the first and second passes. Figure 14-5 shows the results after each pass, as well as a ray-traced solution for comparison. The bent normal calculation is done during the second pass. We compute the bent normal by first multiplying the normalized vector between elements and the form factor. Then we subtract this result from the original element normal.

[Even these two passes do not give a perfect result: the second pass only removes double shadowing, and triple or deeper shadowing would need further passes, a bottomless pit. Instead, a weighted average of the results of the first and second passes is used as a better approximation; the figures below show the results after each pass.]

bgt_4_714_ambient_occlusion_04.jpg

Figure 14-4 Correcting for Occlusion by Overlapping Objects

bgt_4_814_ambient_occlusion_05.jpg

Figure 14-5 Comparing Models Rendered with Our Technique to Reference Images

We calculate the occlusion result by rendering a single quad (or two triangles) so that one pixel is rendered for each surface element. The shader calculates the amount of shadow received at each element and writes it as the alpha component of the color of the pixel. The results are rendered to a texture map so the second pass can be performed with another render. In this pass, the bent normal is calculated and written as the RGB value of the color with a new shadow value that is written in the alpha component.

[In each pass, one surface element is processed as one pixel: the shader writes each element's computed shadow value into the pixel's alpha channel, rendering into a texture map so the next pass can read it; the bent normal is written into the texture's RGB components for use in the following computation.]


14.2.2 Improving Performance

Even though the element-to-element shadow calculation is very fast (a GeForce 6800 can do 150 million of these calculations per second), we need to improve our algorithm to work on more than a couple of thousand elements in real time. We can reduce the amount of work by using simplified geometry for distant surfaces. This approach works well for diffuse lighting environments because the shadows are so soft that those cast by details in distant geometry are not visible. Fortunately, because we do not use the polygons themselves in our technique, we can create surface elements to represent simplified geometry without needing to create alternate polygonal models. We simply group elements whose vertices are neighbors in the original mesh and represent them with a single, larger element. We can do the same thing with the larger elements, creating fewer and even larger elements, forming a hierarchy. Now instead of traversing every single element for each pixel we render, we traverse the hierarchy of elements. If the receiver element is far enough away from the emitter—say, four times the radius of the emitter—we use it for our calculation. Only if the receiver is close to an emitter do we need to traverse its children (if it has any). See Figure 14-6. By traversing a hierarchy in this way, we can improve the performance of our algorithm from O(n²) to O(n log n) in practice. The chart in Figure 14-7 shows that the performance per vertex stays consistent as the number of vertices in the hierarchy increases.

[Element-to-element (pixel-to-pixel) computation like this is already fast, but the algorithm must be strengthened to support as many vertices (elements/pixels) as possible. The idea is to exploit spatial relationships: neighboring vertices are merged into an element group (treated as a single element during computation) and subdivided only where it matters, i.e. the usual hierarchical approach.]

bgt_4_914_ambient_occlu_06.jpg

Figure 14-6 Hierarchical Elements

bgt_4_1014_ambient_occlu_07.jpg

Figure 14-7 Ambient Occlusion Shader Performance for Meshes of Different Densities

[Performance chart.]

We calculate a parent element’s data using its direct descendants in the hierarchy. We calculate the position and normal of a parent element by averaging the positions and normals of its children. We calculate its area as the sum of its children’s areas. We can use a shader for these calculations by making one pass of the shader for each level in the hierarchy, propagating the values from the leaf nodes up. We can then use the same technique to average the results of an occlusion pass that are needed for a following pass or simply treat parent nodes the same as children and avoid the averaging step. It is worth noting that the area of most animated elements varies little, if at all, even for nonrigid objects; therefore, the area does not have to be recalculated for each frame.

[This explains where the parent (higher-level) node data comes from.]

The ambient occlusion fragment shader appears in Listing 14-1.

[The complete shader follows.]

(Listing 14-1, the ambient occlusion fragment shader: for each receiver element it visits the emitter elements via the hierarchy, accumulates the approximated shadow, modulates it by the previous pass's result, and outputs the bent normal in RGB with the accessibility in alpha.)

Example 14-1. Ambient Occlusion Shader
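A rough sketch of the logic the listing implements, based on the chapter's description; the map names, the flat loop, and the index-to-texel mapping are assumptions (the actual listing walks the element hierarchy instead of visiting every element):

    uniform samplerRECT positionMap;    // xyz = element position, w = area (already / PI)
    uniform samplerRECT normalMap;      // xyz = element normal
    uniform samplerRECT lastResultMap;  // previous pass: rgb = bent normal, a = accessibility
    uniform float numElements;
    uniform float mapWidth;

    float4 AmbientOcclusion(float2 self : TEXCOORD0) : COLOR
    {
      float3 rPos       = texRECT(positionMap, self).xyz;
      float3 rNormal    = texRECT(normalMap, self).xyz;
      float3 bentNormal = rNormal;
      float  total      = 0;
      for (float i = 0; i < numElements; i += 1) {
        float2 e = float2(fmod(i, mapWidth), floor(i / mapWidth)) + 0.5;
        float4 ePos    = texRECT(positionMap, e);
        float3 eNormal = texRECT(normalMap, e).xyz;
        float3 d  = ePos.xyz - rPos;
        float  d2 = dot(d, d) + 1e-16;
        float3 v  = d * rsqrt(d2);
        float  shadow = ElementShadow(v, d2, rNormal, eNormal, ePos.w);
        shadow *= texRECT(lastResultMap, e).a;  // second pass: emitter accessibility
        total      += shadow;
        bentNormal -= shadow * v;               // bend away from occluders
      }
      return float4(normalize(bentNormal), saturate(1 - total));
    }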


14.3 Indirect Lighting and Area Lights

We can add an extra level of realism to rendered images by adding indirect lighting caused by light reflecting off diffuse surfaces (Tabellion 2004). We can add a single bounce of indirect light using a slight variation of the ambient occlusion shader. We replace the solid angle function with a disk-to-disk radiance transfer function. We use one pass of the shader to transfer the reflected or emitted light and two passes to shadow the light.

[The shadowing results for direct and indirect lighting are merged in a single shader.]

For indirect lighting, first we need to calculate the amount of light to reflect off the front face of each surface element. If the reflected light comes from environment lighting, then we compute the ambient occlusion data first and use it to compute the environment light that reaches each vertex. If we are using direct lighting from point or directional lights, we compute the light at each element just as if we are shading the surface, including shadow mapping. We can also do both environment lighting and direct lighting and sum the two results. We then multiply the light values by the color of the surface element, so that red surfaces reflect red, yellow surfaces reflect yellow, and so on. Area lights are handled just like light-reflective diffuse surfaces except that they are initialized with a light value to emit.

[How the merge works: we first obtain the direct lighting result and the OSAO result; direct lighting is computed with the usual lighting methods plus shadow mapping. The brightness is the sum of the two lighting results, tinted by the surface color. Area lights are treated simply as light-emitting surfaces.]

Here is the fragment program function to calculate element-to-element radiance transfer:

[The code fragment handling element-to-element radiance transfer follows.]

    float FormFactor(float3 v, float rSquared,
                     float3 receiverNormal, float3 emitterNormal,
                     float emitterArea)
    {
      // Note that emitterArea has already been divided by PI.
      return emitterArea * saturate(dot(emitterNormal, v)) *
             saturate(dot(receiverNormal, v)) / (rSquared + emitterArea);
    }

F ≈ A cos θE cos θR / (π r² + A)

Equation 14-2 Disk-to-Disk Form Factor Approximation

We calculate the amount of light transferred from one surface element to another using the geometric term of the disk-to-disk form factor given in Equation 14-2. We leave off the visibility factor, which takes into account blocking (occluding) geometry. Instead we use a shadowing technique like the one we used for calculating ambient occlusion—only this time we use the same form factor that we used to transfer the light. Also, we multiply the shadowing element’s form factor by the three-component light value instead of a single-component accessibility value.

[We use the formula above to compute the light transferred from one element to another; in other words, the same element-based idea as OSAO is applied to light propagation.]

We now run one pass of our radiance-transfer shader to calculate the maximum amount of reflected or emitted light that can reach any element. Then we run a shadow pass that subtracts from the total light at each element based on how much light reaches the shadowing elements. Just as with ambient occlusion, we can run another pass to improve the lighting by removing double shadowing. Figure 14-8 shows a scene lit with direct lighting plus one and two bounces of indirect lighting.

[We first run one pass of the radiance-transfer shader to compute, via emission and reflection between elements, the total light arriving at each element; then we run a shadow pass that subtracts from that total according to how much light the shadowing elements block. Multiple overlapping shadows are handled with extra passes and weighting, as explained above. The figure below shows the result.]

bgt_4_1514_ambient_occlu_08.jpg

Figure 14-8 Combining Direct and Indirect Lighting


14.4 Conclusion

Global illumination techniques such as ambient occlusion and indirect lighting greatly enhance the quality of rendered diffuse surfaces. We have presented a new technique for calculating light transfer to and from diffuse surfaces using the GPU. This technique is suitable for implementing various global illumination effects in dynamic scenes with deformable geometry.

[Concluding remarks; nothing to add.]


14.5 References

Landis, Hayden. 2002. “Production-Ready Global Illumination.” Course 16 notes, SIGGRAPH 2002.

Pharr, Matt, and Simon Green. 2004. “Ambient Occlusion.” In GPU Gems, edited by Randima Fernando, pp. 279–292. Addison-Wesley.

Sloan, Peter-Pike, Jan Kautz, and John Snyder. 2002. “Precomputed Radiance Transfer for Real-Time Rendering in Dynamic, Low-Frequency Lighting Environments.” ACM Transactions on Graphics (Proceedings of SIGGRAPH 2002) 21(3), pp. 527–536.

Tabellion, Eric, and Arnauld Lamorlette. 2004. “An Approximate Global Illumination System for Computer Generated Films.” ACM Transactions on Graphics (Proceedings of SIGGRAPH 2004) 23(3), pp. 469–476.

Screen Space Ambient Occlusion (SSAO)

  • BACKGROUND

Ambient occlusion is an approximation of the amount by which a point on a surface is occluded by the surrounding geometry, which affects the accessibility of that point by incoming light. (Essentially, it measures how close the point is to surrounding objects.)

In effect, ambient occlusion techniques allow the simulation of proximity shadows – the soft shadows that you see in the corners of rooms and the narrow spaces between objects. (Used to approximate soft shadows.)

Ambient occlusion is often subtle, but will dramatically improve the visual realism of a computer-generated scene:

bgt_3_1

 

The basic idea is to compute an occlusion factor for each point on a surface and incorporate this into the lighting model, usually by modulating the ambient term such that more occlusion = less light, less occlusion = more light. Computing the occlusion factor can be expensive; offline renderers typically do it by casting a large number of rays in a normal-oriented hemisphere to sample the occluding geometry around a point. In general this isn’t practical for realtime rendering.

To achieve interactive frame rates, computing the occlusion factor needs to be optimized as far as possible. One option is to pre-calculate it, but this limits how dynamic a scene can be (the lights can move around, but the geometry can’t). (Speed is the big problem.)

  • CRYSIS METHOD

Way back in 2007, Crytek implemented a realtime solution for Crysis, which quickly became the yardstick for game graphics. The idea is simple: use per-fragment depth information as an approximation of the scene geometry and calculate the occlusion factor in screen space. This means that the whole process can be done on the GPU, is 100% dynamic and completely independent of scene complexity. Here we’ll take a quick look at how the Crysis method works, then look at some enhancements.

Rather than cast rays in a hemisphere, Crysis samples the depth buffer at points derived from samples in a sphere: [samples are taken within a sphere centered on the current point and looked up in the depth buffer]

bgt_3_2

 

This works in the following way:

  • project each sample point into screen space to get the coordinates into the depth buffer
  • sample the depth buffer
  • if the sample position is behind the sampled depth (i.e. inside geometry), it contributes to the occlusion factor

Clearly the quality of the result is directly proportional to the number of samples, which needs to be minimized in order to achieve decent performance. Reducing the number of samples, however, produces ugly ‘banding’ artifacts in the result. This problem is remedied by randomly rotating the sample kernel at each pixel, trading banding for high frequency noise which can be removed by blurring the result.

bgt_3_3

The Crysis method produces occlusion factors with a particular ‘look’ – because the sample kernel is a sphere, flat walls end up looking grey because ~50% of the samples end up being inside the surrounding geometry. Concave corners darken as expected, but convex ones appear lighter since fewer samples fall inside geometry. Although these artifacts are visually acceptable, they produce a stylistic effect which strays somewhat from photorealism.

  • NORMAL-ORIENTED HEMISPHERE

Rather than sample a spherical kernel at each pixel, we can sample within a hemisphere, oriented along the surface normal at that pixel. This improves the look of the effect with the penalty of requiring per-fragment normal data. For a deferred renderer, however, this is probably already available, so the cost is minimal (especially when compared with the improved quality of the result).

(Improvement: take the samples within a hemisphere oriented along the normal.)

bgt_3_4

  • Generating the Sample Kernel

The first step is to generate the sample kernel itself. The requirements are that

  • sample positions fall within the unit hemisphere
  • sample positions are more densely clustered towards the origin. This effectively attenuates the occlusion contribution according to distance from the kernel centre – samples closer to a point occlude it more than samples further away

Generating the hemisphere is easy:
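A sketch of this step (assuming GLM math types and a hypothetical random(lo, hi) helper):

    #include <vector>
    #include <random>
    #include <glm/glm.hpp>

    static std::default_random_engine rng;
    float random(float lo, float hi) {
        return std::uniform_real_distribution<float>(lo, hi)(rng);
    }

    std::vector<glm::vec3> kernel(kernelSize);
    for (int i = 0; i < kernelSize; ++i) {
        // Random direction in the z >= 0 hemisphere.
        kernel[i] = glm::normalize(glm::vec3(
            random(-1.0f, 1.0f),
            random(-1.0f, 1.0f),
            random( 0.0f, 1.0f)));
    }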

This creates sample points on the surface of a hemisphere oriented along the z axis (i.e., first build a canonical hemisphere). The choice of orientation is arbitrary – it will only affect the way we reorient the kernel in the shader. The next step is to scale each of the sample positions to distribute them within the hemisphere. This is most simply done as shown below:
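Inside the loop above, for example:

    kernel[i] *= random(0.0f, 1.0f);  // random distance from the origin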

which will produce an evenly distributed set of points. What we actually want is for the distance from the origin to fall off as we generate more points, according to a curve like this (the weight is tied to distance):

bgt_3_5
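A falloff with that shape can be obtained with an accelerating interpolation (lerp(a, b, t) = a + t * (b - a), a hypothetical helper here):

    float scale = float(i) / float(kernelSize);
    scale = lerp(0.1f, 1.0f, scale * scale);  // quadratic falloff, never below 0.1
    kernel[i] *= scale;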

  • Generating the Noise Texture

Next we need to generate a set of random values used to rotate the sample kernel, which will effectively increase the sample count and minimize the ‘banding’ artefacts mentioned previously.
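A sketch of the noise generation (same helpers as above):

    std::vector<glm::vec3> noise(noiseSize * noiseSize);
    for (int i = 0; i < noiseSize * noiseSize; ++i) {
        noise[i] = glm::normalize(glm::vec3(
            random(-1.0f, 1.0f),
            random(-1.0f, 1.0f),
            0.0f));  // z = 0: rotate around the kernel's z axis
    }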

Note that the z component is zero; since our kernel is oriented along the z-axis, we want the random rotation to occur around that axis. (A random rotation, of all things! Wouldn't the vertex or face normal match the actual geometry better?)

These random values are stored in a texture and tiled over the screen. The tiling of the texture causes the orientation of the kernel to be repeated and introduces regularity into the result. By keeping the texture size small we can make this regularity occur at a high frequency, which can then be removed with a blur step that preserves the low-frequency detail of the image. Using a 4×4 texture and blur kernel produces excellent results at minimal cost. This is the same approach as used in Crysis.

  • The SSAO Shader

With all the prep work done, we come to the meat of the implementation: the shader itself. There are actually two passes: calculating the occlusion factor, then blurring the result.

Calculating the occlusion factor requires first obtaining the fragment’s view space position and normal:
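A sketch of this step in GLSL (uTexLinearDepth and vViewRay follow the surrounding text; the exact reconstruction depends on how depth is stored):

    float depth = texture(uTexLinearDepth, vTexcoord).r;
    vec3 origin = vViewRay * depth;  // fragment's view space position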

I reconstruct the view space position by combining the fragment’s linear depth with the interpolated vViewRay. See Matt Pettineo’s blog for a discussion of other methods for reconstructing position from depth. The important thing is that origin ends up being the fragment’s view space position.

Retrieving the fragment’s normal is a little more straightforward; the scale/bias and normalization steps are necessary unless you’re using some high precision format to store the normals:
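A sketch (uTexNormals is an assumed name for the normal buffer):

    vec3 normal = texture(uTexNormals, vTexcoord).rgb * 2.0 - 1.0;  // undo scale/bias
    normal = normalize(normal);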

Next we need to construct a change-of-basis matrix to reorient our sample kernel along the origin’s normal. We can cunningly incorporate the random rotation here, as well:

(This can be read as an example of how to use random values in a shader.)
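A sketch of the matrix construction (uTexRandom is an assumed name for the noise texture):

    vec3 rvec = texture(uTexRandom, vTexcoord * uNoiseScale).xyz * 2.0 - 1.0;
    vec3 tangent = normalize(rvec - normal * dot(rvec, normal));  // Gram-Schmidt
    vec3 bitangent = cross(normal, tangent);
    mat3 tbn = mat3(tangent, bitangent, normal);  // normal fills the z component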

The first line retrieves a random vector rvec from our noise texture. uNoiseScale is a vec2 which scales vTexcoord to tile the noise texture. So if our render target is 1024×768 and our noise texture is 4×4, uNoiseScale would be (1024 / 4, 768 / 4). (This can just be calculated once when initialising the noise texture and passed in as a uniform).

The next three lines use the Gram-Schmidt process to compute an orthogonal basis, incorporating our random rotation vector rvec.

The last line constructs the transformation matrix from our tangent, bitangent and normal vectors. The normal vector fills the z component of our matrix because that is the axis along which the base kernel is oriented.

Next we loop through the sample kernel (passed in as an array of vec3, uSampleKernel), sample the depth buffer and accumulate the occlusion factor:
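A sketch of the loop (uSampleKernelSize is an assumed uniform; samplePos plays the role the text calls "sample"; the rangeCheck refinement is introduced below):

    float occlusion = 0.0;
    for (int i = 0; i < uSampleKernelSize; ++i) {
        // get sample position:
        vec3 samplePos = tbn * uSampleKernel[i];
        samplePos = samplePos * uRadius + origin;

        // project sample position:
        vec4 offset = vec4(samplePos, 1.0);
        offset = uProjectionMat * offset;   // view -> clip space
        offset.xy /= offset.w;              // w-divide
        offset.xy = offset.xy * 0.5 + 0.5;  // scale/bias to texture coordinates

        // get sample depth:
        float sampleDepth = texture(uTexLinearDepth, offset.xy).r;

        // accumulate:
        occlusion += (sampleDepth <= samplePos.z ? 1.0 : 0.0);
    }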

Getting the view space sample position is simple; we multiply by our orientation matrix tbn, then scale the sample by uRadius (a nice artist-adjustable factor, passed in as a uniform) then add the fragment’s view space position origin.

We now need to project sample (which is in view space) back into screen space to get the texture coordinates with which we sample the depth buffer. This step follows the usual process – multiply by the current projection matrix (uProjectionMat), perform w-divide then scale and bias to get our texture coordinate: offset.xy.

Next we read sampleDepth out of the depth buffer (uTexLinearDepth). If this is in front of the sample position, the sample is ‘inside’ geometry and contributes to occlusion. If sampleDepth is behind the sample position, the sample doesn’t contribute to the occlusion factor. Introducing a rangeCheck helps to prevent erroneous occlusion between large depth discontinuities:

    float rangeCheck = abs(origin.z - sampleDepth) < uRadius ? 1.0 : 0.0;
    occlusion += (sampleDepth <= samplePos.z ? 1.0 : 0.0) * rangeCheck;

As you can see, rangeCheck works by zeroing any contribution from outside the sampling radius.

The final step is to normalize the occlusion factor and invert it, in order to produce a value that can be used to directly scale the light contribution.
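For instance:

    occlusion = 1.0 - (occlusion / float(uSampleKernelSize));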

  • The Blur Shader

The blur shader is very simple: all we want to do is average a 4×4 rectangle around each pixel to remove the 4×4 noise pattern:
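A sketch of such a blur pass (uTexInput is an assumed name for the AO render target; uTexelSize as described below):

    uniform sampler2D uTexInput;
    uniform vec2 uTexelSize;   // 1.0 / resolution of the AO render target

    in vec2 vTexcoord;
    out float fResult;

    void main() {
        float result = 0.0;
        vec2 hlim = vec2(-1.5);  // offsets -1.5 .. +1.5 texels: a 4x4 window
        for (int x = 0; x < 4; ++x) {
            for (int y = 0; y < 4; ++y) {
                vec2 offset = (hlim + vec2(float(x), float(y))) * uTexelSize;
                result += texture(uTexInput, vTexcoord + offset).r;
            }
        }
        fResult = result / 16.0;
    }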

The only thing to note in this shader is uTexelSize, which allows us to accurately sample texel centres based on the resolution of the AO render target.

bgt_3_7

  • CONCLUSION

The normal-oriented hemisphere method produces a more realistic-looking result than the basic Crysis method, without much extra cost, especially when implemented as part of a deferred renderer where the extra per-fragment data is readily available. It’s pretty scalable, too – the main performance bottleneck is the size of the sample kernel, so you can either go for fewer samples or use a lower resolution AO target.

A demo implementation is available here.

Anti-aliasing

Anti-aliasing (AA), also rendered as edge smoothing or de-aliasing, is a technique for eliminating the jagged, stair-stepped edges that appear on the outlines of objects in a display's output. The jaggies usually arise from aliasing, when a high-resolution signal is represented at low resolution or when 3D coordinates cannot be rasterized to exact positions; anti-aliasing techniques effectively address these problems. It is commonly used in digital signal processing, digital photography, computer graphics, digital audio, and video games to smooth aliased signals.


Super-Sampling Anti-Aliasing (SSAA)

Super-Sampling Anti-Aliasing (SSAA) is an early anti-aliasing method; it is resource-hungry but simple and direct. The image is first rendered into a buffer at an enlarged size; the enlarged image is then super-sampled, typically taking 2 or 4 neighboring pixels and blending them into each final pixel, so that every pixel carries the characteristics of its neighbors. Transitions between pixels become closer in color, and edge gradients become smooth. The final pixels are then downscaled back to the original image size and stored into the frame buffer in video memory, replacing the original image, before being output to the display as one frame. In effect, a blurry large image is refined and shrunk into a crisp small one. If every frame is processed this way, all frames of a game or video carry the anti-aliased look. The factor by which the image is enlarged distinguishes the strength of the effect: the x2, x4, x8 after "AA" (as in Figure 1) denote that enlargement factor. SSAA generally uses one of two sampling schemes:

1. Ordered Grid Super-Sampling (OGSS), which takes 2 neighboring pixels per sample.

2. Rotated Grid Super-Sampling (RGSS), which takes 4 neighboring pixels per sample.


Multi-Sampling Anti-Aliasing (MSAA)

Multi-Sampling Anti-Aliasing (MSAA) is a special form of super-sampling (SSAA). MSAA first came from OpenGL. Specifically, MSAA super-samples only the data in the Z-buffer and the stencil buffer, which can be loosely understood as anti-aliasing only the edges of polygons. Compared with SSAA, which processes all the data in the image, MSAA's resource consumption is much lower, though its quality may fall slightly short of SSAA.


Coverage Sampling Anti-Aliasing (CSAA)

Coverage Sampling Anti-Aliasing (CSAA) is an AA technique first made practical by NVIDIA's G80 and its derivatives, and was exclusive to the NVIDIA GeForce 8/9/G200 series. CSAA builds on MSAA to save further video memory and bandwidth. In short, CSAA overrides the coordinates of the sub-pixel samples within edge polygons, forcing the sample positions onto coordinates precomputed by the hardware and driver. It behaves like MSAA with a unified sampling standard, executing edge sampling with maximal efficiency, so the performance gain is striking: 16xCSAA drops performance only slightly more than 4xMSAA while looking almost like 8xMSAA, and 8xCSAA delivers 4xMSAA quality at the cost of 2xMSAA. [1]

NVIDIA has since removed CSAA; the technique has arguably become outdated: it trails TXAA in quality and FXAA in performance, and being NVIDIA-only, compatibility was also a problem.


Custom Filter Anti-Aliasing (CFAA)

Custom Filter Anti-Aliasing (CFAA) originated with the AMD/ATI R600 family. Simply put, CFAA is MSAA with an enlarged sampling area: whereas MSAA strictly samples and scales the pixels on object edges, CFAA lets the driver flexibly pick the pixels that contribute most to visible aliasing, buying smoothness at a small performance cost and with a fairly small GPU resource footprint.


Fast Approximate Anti-Aliasing (FXAA)

Fast Approximate Anti-Aliasing (FXAA) is a high-performance approximation of the traditional MSAA effect. It is a single-pass pixel shader that, like MLAA, runs in the post-processing stage of the target game's rendering pipeline, but unlike MLAA it does not rely on DirectCompute; it is a pure post-processing shader with no dependency on any GPU compute API. For this reason FXAA makes no special demands on the graphics card: it is fully compatible with NVIDIA and AMD GPUs alike (MLAA supports only AMD) and with DX9, DX10, and DX11.


Temporal Anti-Aliasing (TXAA/TAA)

TXAA provides color-correction processing at the hardware level through the HDR post-processing pipeline; as post-processing, its principle is much like FXAA's: it integrates hardware AA with complex, high-quality filters of the kind used in CG films, reducing the tearing and crawling artifacts of aliasing in motion.

However, delivering better quality and smoother results than FXAA requires TXAA to be integrated during game development. TXAA is thus an opt-in anti-aliasing technique: it lacks FXAA's universality and is instead optimized per game, and this specialization is what makes TXAA maximally efficient.

In other words, TXAA is a new kind of anti-aliasing that requires games to add code supporting TXAA.

TXAA is a new film-style anti-aliasing technique designed to reduce temporal aliasing (crawling and flickering in motion). It combines a temporal filter, hardware anti-aliasing, and custom CG-film-style AA resolves. To filter any particular pixel on screen, TXAA uses samples inside and outside the pixel as well as samples from previous frames, providing extremely high filtering quality. TXAA improves on the temporal filtering of standard 2xMSAA and 4xMSAA; on fences and vegetation, and in motion, TXAA already approaches, and sometimes exceeds, the quality of other high-end professional AA algorithms. Because it filters at higher quality, TXAA yields a softer image than the lower-quality filtering of traditional MSAA.

bgt_2_1


Multi-Frame Sampled Anti-Aliasing (MFAA)

MFAA is an anti-aliasing technique NVIDIA derived from MSAA. It currently requires a GPU with the Maxwell architecture. With Maxwell, NVIDIA introduced programmable sample positions for rasterization, stored in random access memory (RAM), opening the door to more flexible and innovative AA techniques that uniquely address the challenges of modern game engines, such as the performance demands of high-quality AA. Enabling MFAA for a program in the NVIDIA control panel and selecting MSAA in the game turns it on. Its visual result is clearly better than MSAA at the same level, and the technique raises edge quality while keeping the performance cost to a minimum: by alternating the AA sample patterns in both time and space, 4xMFAA costs only as much as 2xMSAA while matching the quality of 4xMSAA. [3]

GPUs that support MFAA: GTX TITAN Z, GTX TITAN X, GTX 980 Ti, GTX 980, GTX 970, GTX 960, GTX 950. [4]

Deferred/Forward Rendering

http://www.cnblogs.com/polobymulberry/p/5126892.html

1. Technical foundations of rendering paths

Before introducing the various approaches to lighting, we must first introduce the modern graphics rendering pipeline, which is the technical foundation of the rendering paths discussed below.

bgt_1_1

Today's mainstream game and graphics engines, as well as the underlying APIs (such as DirectX and OpenGL), support the modern graphics pipeline. The modern pipeline is also called the programmable pipeline: in short, the stages that used to be fixed (vertex processing, pixel color processing, and so on) became user-programmable on the GPU. The benefit is far greater freedom for the user; the drawback is that the user must implement much of the functionality themselves.

Here is a brief walkthrough of the programmable pipeline, using OpenGL drawing a triangle as an example. First the user submits three vertices to the Vertex Shader. Then the user can optionally run a Tessellation Shader (useful for surface subdivision) and a Geometry Shader (which can add or remove geometry on the GPU). Rasterization follows, and the rasterized results are passed to the Fragment Shader for pixel-level processing. Finally the processed pixels are written to the framebuffer and displayed on screen.

2. Common rendering paths

A rendering path is simply the way lighting is rendered for a scene. Since a scene may contain many lights, possibly dynamic ones, achieving the best balance between speed and quality is genuinely hard. The evolution of graphics hardware is what spawned this many rendering paths for handling lighting.

2.1 Forward Rendering

bgt_1_2

Forward rendering is the rendering path that almost every engine provides. With forward rendering, lighting is computed in the Vertex Shader or Fragment Shader stage for each vertex or each pixel, and this is done for every light, accumulating the final result. The core pseudocode of forward rendering from [1] is sketched below.
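A sketch of that pseudocode (paraphrased, not the exact listing from [1]):

    single-pass forward:
        for each object:
            render the object, applying all affecting lights in one shader

    multi-pass forward:
        for each light:
            for each object affected by the light:
                framebuffer += brdf(object, light)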

For example, in the Unity3D 4.x engine, the circle in the figure below (representing a geometry) is processed with forward rendering,

bgt_1_3

which yields the following result:

bgt_1_4

That is, for the four lights A, B, C, D we compute per-pixel lighting in the Fragment Shader; for lights D, E, F, G we compute per-vertex lighting in the Vertex Shader; and lights G, H are handled with spherical harmonic (SH) functions.

Pros and cons of forward rendering

Clearly, with forward rendering the number of lights has a huge impact on computational complexity, so it is best suited to scenes with few lights, such as outdoor scenes (often just sunlight).

With many lights, however, forward rendering becomes extremely inefficient: complexity grows linearly with the number of lights.

This calls for some necessary optimizations, for example:

1. Prefer lighting in the vertex shader: if a mesh has 10,000 vertices, then n lights cost at least 10,000·n vertex-shader lighting computations, whereas fragment-shader lighting costs far more, since an ordinary 1024×768 screen has nearly 800,000 pixels to process. So if the vertex count is lower than the pixel count, light in the vertex shader where possible.

2. When lighting in the fragment shader, we need not process every pixel for every light, because each light has its own region of influence: a point light affects a sphere, while a directional light affects the whole space. Pixels outside a light's region can be skipped. Doing so, however, shifts extra load onto the CPU, which must compute those regions.

3. Lights affect a given geometry to different degrees, so lights with very small contributions can be ignored. A typical example is Unity, which considers only the 4 most important lights.

2.2 Deferred Rendering

bgt_1_5

Deferred rendering, as the name suggests, defers the lighting step until later: lighting is processed after the 3D objects have already been rasterized into a 2D image. In other words, lighting moves from object space into image space. The essential tool for this is the G-Buffer, which stores each pixel's Position, Normal, Diffuse Color, and other material parameters. With this information we can light every pixel in image space [3]. The core pseudocode of deferred rendering is sketched below.
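A sketch of that pseudocode (paraphrased):

    geometry pass:
        for each object:
            render position, normal, diffuse color, ... into the G-Buffer

    lighting pass:
        for each light:
            for each pixel covered by the light:
                framebuffer += brdf(G-Buffer[pixel], light)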

A simple example follows.

First we use textures that store the various attributes. For example, the Depth Buffer below is mainly used to determine how far each pixel is from the viewpoint.

bgt_1_6

. Depth Buffer

The specular effect is computed from the specular intensity/power map.

bgt_1_7

.Specular Intensity/Power

The image below shows the normal data, which is crucial: it is the most important set of inputs for the lighting computation.

bgt_1_8

.Normal Buffer

The image below shows the Diffuse Color Buffer.

bgt_1_9

.Diffuse Color Buffer

This is the final result rendered with deferred rendering.

bgt_1_10

.Deferred Lighting Results

Deferred rendering's greatest strength is that it fully decouples, in complexity terms, the number of lights from the number of objects in the scene: whether the scene contains one triangle or a million, the final complexity does not change dramatically with the light count. From the pseudocode above, the complexity of deferred rendering is O(number of pixels × number of lights).

But the limitations of deferred rendering are just as obvious. Suppose we store the following data in the G-Buffer:

bgt_1_11

For an ordinary 1024×768 screen this amounts to 1024 × 768 × 128 bit ≈ 12 MB. Against today's multi-gigabyte video memories that may not sound like much, but the G-Buffer's memory cost is still considerable: on low-end cards it is a real burden, and fancier effects require a larger G-Buffer, growing at a significant rate. Incidentally, the bandwidth spent reading and writing the G-Buffer is another non-negligible drawback.

Optimizing deferred rendering is itself a challenging problem. Below are a few ways of reducing its storage and bandwidth cost. The simplest and most obvious is to minimize the G-Buffer layout, which gave rise to the light pre-pass method. Another is to group several lights and process them together, which gave rise to tile-based deferred rendering.

2.2.1 Light Pre-Pass

Light pre-pass was first described by Wolfgang Engel on his blog [2]. The concrete steps are:

(1) Store only the Z value and the normal in the G-Buffer. Compared with standard deferred rendering, the Diffuse Color, Specular Color, and the corresponding material index values are omitted.

(2) In the FS stage, use this G-Buffer to compute the required light properties, such as Normal*LightDir, LightColor, and Specular properties. Alpha-blend these computed lighting terms into the LightBuffer (the buffer used to store light properties).

(3) Finally, pass the result to a forward rendering pass to compute the final lighting, as sketched below.
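A compact sketch of the three passes (paraphrased):

    pass 1 (geometry): write depth (Z) and normal into the G-Buffer
    pass 2 (lighting): for each light:
        LightBuffer += light properties (N * LightDir, LightColor, specular)   [alpha-blended]
    pass 3 (forward):  for each object, with its own material shader:
        framebuffer += material(object) combined with LightBuffer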

Compared with traditional deferred rendering, light pre-pass can render each geometry with its own shader, so each object's material properties can vary much more. Note that in traditional deferred rendering the second step (see the pseudocode) iterates over the lights, which makes light setup more flexible, whereas the third step of light pre-pass is effectively forward rendering, so materials can be set per mesh; the two approaches complement each other, each with pros and cons. Another advantage of light pre-pass is that it plays well with MSAA: although it is not 100% MSAA (unless DX10/11 features are used), having the Z and normal values makes it easy to find edges and sample them properly.

In the two images below, the left one was drawn with traditional deferred rendering and the right one with light pre-pass; there should be no significant difference between the two.

bgt_1_12

2.2.2 Tile-Based Deferred Rendering

The main idea of TBDR is to divide the screen into small tiles, derive each tile's bounding box from the depths within it, intersect the lights against each tile's bounding box to obtain the list of lights affecting that tile, and finally compute the tile's lighting from that list. [4][5]

Compared with deferred rendering, which derives each light's volume of influence (light volume) to decide which pixels it affects, once per light, TBDR only iterates over pixels, intersecting each pixel's tile with the lights to find the ones acting on it, and shades using the G-Buffer. This both reduces the number of lights to consider and, compared with traditional deferred rendering, reduces the bandwidth consumed.

2.3 Forward+


Forward+ == Forward + Light Culling [6]. Forward+ closely resembles tile-based deferred rendering. Concretely, the scene is first rendered in a z-prepass: color writes are disabled and only z values are written into the z-buffer. Note that this step is mandatory for Forward+ but optional for the other rendering paths. The following steps mirror TBDR: divide the screen into tiles and compute bounding boxes, except that TBDR does this from the G-Buffer while Forward+ does it from the Z-buffer. The final step is plain forward shading: in the FS stage every pixel is lit using its tile's light list, whereas TBDR uses G-Buffer-based deferred shading. A sketch follows.
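A sketch of the Forward+ flow (paraphrased):

    z-prepass:     render depth only into the Z-buffer (color writes off)
    light culling: for each screen tile:
                       compute the tile's depth bounds from the Z-buffer
                       build the list of lights intersecting the tile
    shading:       for each fragment (forward):
                       accumulate all lights in its tile's list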

实际上,forward+deferred运行的更快。我们可以看出由于Forward+只要写深度缓存就可以,而Deferred Render除了深度缓存,还要写入法向缓存。而在Light Culling步骤,Forward+只需要计算出哪些light对该tile有影响即可。而Deferred Render还在这一部分把光照处理给做了。而这一部分,Forward+是放在Shading阶段做的。所以Shading阶段Forward+耗费更多时间。但是对目前硬件来说,Shading耗费的时间没有那么多。

bgt_1_13

Forward+ has many further advantages, mostly the inherent strengths of traditional forward rendering itself, so Forward+ looks like a rendering path that combines the advantages of all the other rendering paths.

bgt_1_14

3. Summary

We first write down the rendering equation and then compare forward rendering, deferred rendering, and Forward+ rendering in its terms [6].

3.1 Rendering Equation

At a surface point, incoming light of a given intensity arrives from an incidence direction. A shading function computes the intensity of the light leaving along the outgoing direction, and the result is weighted by the visibility of the outgoing light with respect to the viewpoint. Note that the sum runs over all n light sources in the scene:


 bgt_1_15

3.2 Forward Rendering

Since forward rendering itself copes poorly with many lights, each point here no longer considers all n light sources but only a small, hand-picked set of k lights, so the lighting result is not perfect. Moreover, the per-light visibility term cannot be computed.

bgt_1_16

3.3 Deferred Rendering

Because deferred rendering uses light culling, it need not iterate over all the lights in the scene, only the lights that survive culling. Deferred rendering also factors the BxDF computation out into its own pass.

bgt_1_17

3.4 Forward+ Rendering

可以看出Forward+Forward最大区别就是光源的挑选上有了很到改进。

bgt_1_18

References

[1] Shawn Hargreaves. (2004) “Deferred Shading”. [Online] Available:

http://hall.org.ua/halls/wizzard/books/articles-cg/DeferredShading.pdf (April 15,2015)

[2] Wolfgang Engel. (March 16, 2008) “Light Pre-Pass Renderer”. [Online] Available:

http://diaryofagraphicsprogrammer.blogspot.com/2008/03/light-pre-pass-renderer.html(April 14,2015)

[3] Klint J. Deferred Rendering in Leadwerks Engine[J]. Copyright Leadwerks Corporation, 2008.

[4] 龚敏敏 (Minmin Gong). (April 22, 2012) “Forward框架的逆袭:解析Forward+渲染” (“The Forward Framework Strikes Back: Analyzing Forward+ Rendering”). [Online] Available:

http://www.cnblogs.com/gongminmin/archive/2012/04/22/2464982.html(April 13,2015)

[5] Lauritzen A. Deferred rendering for current and future rendering pipelines[J]. SIGGRAPH Course: Beyond Programmable Shading, 2010: 1-34.

[6] Harada T, McKee J, Yang J C. Forward+: Bringing deferred lighting to the next level[J]. 2012.