Screen Space Ambient Occlusion (SSAO)

  • BACKGROUND

Ambient occlusion is an approximation of the amount by which a point on a surface is occluded by the surrounding geometry, which affects the accessibility of that point by incoming light. (Essentially, it depends on how close the point is to nearby objects.)

In effect, ambient occlusion techniques allow the simulation of proximity shadows – the soft shadows that you see in the corners of rooms and the narrow spaces between objects. (It is used to simulate soft shadows.)

Ambient occlusion is often subtle, but it dramatically improves the visual realism of a computer-generated scene:

[Figure bgt_3_1: a scene rendered with and without ambient occlusion]

The basic idea is to compute an occlusion factor for each point on a surface and incorporate it into the lighting model, usually by modulating the ambient term such that more occlusion = less light, less occlusion = more light. Computing the occlusion factor can be expensive; offline renderers typically do it by casting a large number of rays in a normal-oriented hemisphere to sample the occluding geometry around a point. In general this isn’t practical for realtime rendering.
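
To make the modulation concrete, here is a minimal GLSL sketch of how an occlusion factor typically scales the ambient term (the variable names are illustrative assumptions, not from the original):

```glsl
// occlusionFactor is 1.0 for a fully unoccluded point, 0.0 for a fully occluded one.
vec3 ambient = uAmbientColor * albedo * occlusionFactor;
```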

To achieve interactive frame rates, computing the occlusion factor needs to be optimized as far as possible. One option is to pre-calculate it, but this limits how dynamic a scene can be (the lights can move around, but the geometry can’t). (Speed is the big problem.)

  • CRYSIS METHOD

Way back in 2007, Crytek implemented a realtime solution for Crysis, which quickly became the yardstick for game graphics. The idea is simple: use per-fragment depth information as an approximation of the scene geometry and calculate the occlusion factor in screen space. This means that the whole process can be done on the GPU, is 100% dynamic and completely independent of scene complexity. Here we’ll take a quick look at how the Crysis method works, then look at some enhancements.

Rather than cast rays in a hemisphere, Crysis samples the depth buffer at points derived from samples in a sphere: (samples are taken from the depth buffer within a sphere centred on the current point)

[Figure bgt_3_2: sampling the depth buffer at points within a sphere around the current pixel]

This works in the following way:

  • project each sample point into screen space to get the coordinates into the depth buffer
  • sample the depth buffer
  • if the sample position is behind the sampled depth (i.e. inside geometry), it contributes to the occlusion factor (see the sketch after this list)
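
A minimal GLSL sketch of this sphere-kernel test (not Crytek's actual code; the uniform names are assumptions, a linear view space depth buffer is assumed, and the comparison direction depends on your depth convention):

```glsl
float occlusion = 0.0;
for (int i = 0; i < uKernelSize; ++i) {
    // offset the fragment's view space position by a point in the spherical kernel
    vec3 samplePos = origin + uSphereKernel[i] * uRadius;

    // project into screen space to get depth buffer coordinates
    vec4 offset = uProjectionMat * vec4(samplePos, 1.0);
    offset.xy = (offset.xy / offset.w) * 0.5 + 0.5;

    // if the stored depth is in front of the sample, the sample is inside geometry
    float sceneDepth = texture(uTexLinearDepth, offset.xy).r;
    occlusion += (sceneDepth <= samplePos.z) ? 1.0 : 0.0;
}
```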

Clearly the quality of the result is directly proportional to the number of samples, which needs to be minimized in order to achieve decent performance. Reducing the number of samples, however, produces ugly ‘banding’ artifacts in the result. This problem is remedied by randomly rotating the sample kernel at each pixel, trading banding for high frequency noise which can be removed by blurring the result.

[Figure bgt_3_3: banding artefacts versus the high frequency noise produced by randomly rotating the sample kernel]

The Crysis method produces occlusion factors with a particular ‘look’ – because the sample kernel is a sphere, flat walls end up looking grey because ~50% of the samples end up being inside the surrounding geometry. Concave corners darken as expected, but convex ones appear lighter since fewer samples fall inside geometry. Although these artifacts are visually acceptable, they produce a stylistic effect which strays somewhat from photorealism.

  • NORMAL-ORIENTED HEMISPHERE

Rather than sample a spherical kernel at each pixel, we can sample within a hemisphere, oriented along the surface normal at that pixel. This improves the look of the effect with the penalty of requiring per-fragment normal data. For a deferred renderer, however, this is probably already available, so the cost is minimal (especially when compared with the improved quality of the result).

(Improvement: take the samples within a hemisphere oriented along the surface normal.)

[Figure bgt_3_4: a sample kernel within a hemisphere oriented along the surface normal]

  • Generating the Sample Kernel

The first step is to generate the sample kernel itself. The requirements are that

  • sample positions fall within the unit hemisphere
  • sample positions are more densely clustered towards the origin. This effectively attenuates the occlusion contribution according to distance from the kernel centre – samples closer to a point occlude it more than samples further away

Generating the hemisphere is easy:
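
A minimal C++ sketch, assuming a random(min, max) helper that returns uniformly distributed floats and a small vec3 type with a normalize() method:

```cpp
// Generate kernelSize points on the surface of a unit hemisphere oriented along +z.
for (int i = 0; i < kernelSize; ++i) {
    kernel[i] = vec3(
        random(-1.0f, 1.0f),   // x: full range
        random(-1.0f, 1.0f),   // y: full range
        random( 0.0f, 1.0f));  // z: positive half-space only, giving a hemisphere
    kernel[i].normalize();     // push each point onto the hemisphere's surface
}
```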

This creates sample points on the surface of a hemisphere oriented along the z axis (first build a canonical hemisphere). The choice of orientation is arbitrary – it will only affect the way we reorient the kernel in the shader. The next step is to scale each of the sample positions to distribute them within the hemisphere. This is most simply done as:
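
A sketch of that simple scaling, inside the same loop (again assuming the random() helper):

```cpp
// Scale each point by a uniform random factor so the points fall
// within the hemisphere rather than on its surface.
kernel[i] *= random(0.0f, 1.0f);
```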

which will produce an evenly distributed set of points. What we actually want is for the distance from the origin to fall off as we generate more points, according to a curve like this: (the weighting is related to distance)

[Figure bgt_3_5: falloff curve for sample distance as a function of sample index]
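
One way to achieve this falloff (a sketch; the exact function is a design choice, and lerp is an assumed helper where lerp(a, b, t) = a + t * (b - a)):

```cpp
// Rescale each sample so distance from the origin accelerates from 0.1 to 1.0
// with the sample index; squaring the interpolant clusters samples near the origin.
float scale = float(i) / float(kernelSize);
scale = lerp(0.1f, 1.0f, scale * scale);
kernel[i] *= scale;
```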

  • Generating the Noise Texture

Next we need to generate a set of random values used to rotate the sample kernel, which will effectively increase the sample count and minimize the ‘banding’ artefacts mentioned previously.
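
A C++ sketch of the noise generation (the random() helper is assumed, as before):

```cpp
// Generate a small set of random rotation vectors lying in the x/y plane.
for (int i = 0; i < noiseSize; ++i) {
    noise[i] = vec3(
        random(-1.0f, 1.0f),
        random(-1.0f, 1.0f),
        0.0f);            // z = 0: the rotation happens around the kernel's z axis
    noise[i].normalize();
}
```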

Note that the z component is zero; since our kernel is oriented along the z-axis, we want the random rotation to occur around that axis. (So it really is a random rotation! Couldn’t the vertex or face normal be used instead, to better match the actual geometry?)

These random values are stored in a texture and tiled over the screen. The tiling of the texture causes the orientation of the kernel to be repeated and introduces regularity into the result. By keeping the texture size small we can make this regularity occur at a high frequency, which can then be removed with a blur step that preserves the low-frequency detail of the image. Using a 4×4 texture and blur kernel produces excellent results at minimal cost. This is the same approach as used in Crysis.

  • The SSAO Shader

With all the prep work done, we come to the meat of the implementation: the shader itself. There are actually two passes: calculating the occlusion factor, then blurring the result.

Calculating the occlusion factor requires first obtaining the fragment’s view space position and normal:
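
A GLSL sketch of the reconstruction (vViewRay, vTexcoord and uTexLinearDepth follow the surrounding text; storing linear view space depth is an assumption of this sketch):

```glsl
in vec2 vTexcoord;  // screen space texture coordinate
in vec3 vViewRay;   // interpolated view space ray through this fragment

uniform sampler2D uTexLinearDepth; // linear view space depth

// Scale the view ray by the fragment's linear depth to recover its position.
float linearDepth = texture(uTexLinearDepth, vTexcoord).r;
vec3 origin = vViewRay * linearDepth;
```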

I reconstruct the view space position by combining the fragment’s linear depth with the interpolated vViewRay. See Matt Pettineo’s blog for a discussion of other methods for reconstructing position from depth. The important thing is that origin ends up being the fragment’s view space position.

Retrieving the fragment’s normal is a little more straightforward; the scale/bias and normalization steps are necessary unless you’re using some high precision format to store the normals:
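
A sketch, assuming view space normals stored scaled/biased into a [0, 1] texture (uTexNormals is an assumed name):

```glsl
uniform sampler2D uTexNormals; // view space normals, remapped into [0, 1] on write

// Undo the [0, 1] -> [-1, 1] scale/bias, then renormalize.
vec3 normal = texture(uTexNormals, vTexcoord).xyz * 2.0 - 1.0;
normal = normalize(normal);
```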

Next we need to construct a change-of-basis matrix to reorient our sample kernel along the origin’s normal. We can cunningly incorporate the random rotation here, as well:
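
A GLSL sketch, matching the step-by-step description below (uTexRandom is an assumed name for the noise texture; uNoiseScale is explained in the text):

```glsl
// Fetch a random rotation vector from the tiled noise texture (stored scale/biased).
vec3 rvec = texture(uTexRandom, vTexcoord * uNoiseScale).xyz * 2.0 - 1.0;
// Gram-Schmidt: project rvec onto the plane perpendicular to normal, then normalize.
vec3 tangent = rvec - normal * dot(rvec, normal);
tangent = normalize(tangent);
vec3 bitangent = cross(normal, tangent);
// Columns are tangent, bitangent, normal: the kernel's z axis maps onto the normal.
mat3 tbn = mat3(tangent, bitangent, normal);
```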

(This can be read as an example of how to use random values in a shader.)

The first line retrieves a random vector rvec from our noise texture. uNoiseScale is a vec2 which scales vTexcoord to tile the noise texture. So if our render target is 1024×768 and our noise texture is 4×4, uNoiseScale would be (1024 / 4, 768 / 4). (This can just be calculated once when initialising the noise texture and passed in as a uniform).

The next three lines use the Gram-Schmidt process to compute an orthogonal basis, incorporating our random rotation vector rvec.

The last line constructs the transformation matrix from our tangent, bitangent and normal vectors. The normal vector fills the z component of our matrix because that is the axis along which the base kernel is oriented.

Next we loop through the sample kernel (passed in as an array of vec3, uSampleKernel), sample the depth buffer and accumulate the occlusion factor:
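
A GLSL sketch of the loop (uniform names follow the surrounding text; MAX_KERNEL_SIZE is an assumed compile-time constant, samplePos is used because sample is a reserved word in newer GLSL, and the depth comparison direction depends on your view space convention):

```glsl
const int MAX_KERNEL_SIZE = 128;
uniform int uSampleKernelSize;
uniform vec3 uSampleKernel[MAX_KERNEL_SIZE];
uniform float uRadius;
uniform mat4 uProjectionMat;

float occlusion = 0.0;
for (int i = 0; i < uSampleKernelSize; ++i) {
    // get the sample position in view space:
    vec3 samplePos = tbn * uSampleKernel[i];
    samplePos = samplePos * uRadius + origin;

    // project the sample into screen space:
    vec4 offset = vec4(samplePos, 1.0);
    offset = uProjectionMat * offset;
    offset.xy /= offset.w;               // perspective divide
    offset.xy = offset.xy * 0.5 + 0.5;   // scale/bias into [0, 1] texture coordinates

    // read the scene depth at the sample's screen position:
    float sampleDepth = texture(uTexLinearDepth, offset.xy).r;

    // range check and accumulate:
    float rangeCheck = abs(origin.z - sampleDepth) < uRadius ? 1.0 : 0.0;
    occlusion += (sampleDepth <= samplePos.z ? 1.0 : 0.0) * rangeCheck;
}
```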

Getting the view space sample position is simple; we multiply by our orientation matrix tbn, then scale the sample by uRadius (a nice artist-adjustable factor, passed in as a uniform), then add the fragment’s view space position origin.

We now need to project sample (which is in view space) back into screen space to get the texture coordinates with which we sample the depth buffer. This step follows the usual process – multiply by the current projection matrix (uProjectionMat), perform w-divide then scale and bias to get our texture coordinate: offset.xy.

Next we read sampleDepth out of the depth buffer (uTexLinearDepth). If this is in front of the sample position, the sample is ‘inside’ geometry and contributes to occlusion. If sampleDepth is behind the sample position, the sample doesn’t contribute to the occlusion factor. Introducing a rangeCheck helps to prevent erroneous occlusion between large depth discontinuities:
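
These are the relevant lines from the loop sketch above:

```glsl
// Zero the contribution of samples whose stored depth lies outside
// the sampling radius around the fragment:
float rangeCheck = abs(origin.z - sampleDepth) < uRadius ? 1.0 : 0.0;
occlusion += (sampleDepth <= samplePos.z ? 1.0 : 0.0) * rangeCheck;
```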

[Figure bgt_3_6: the occlusion result with and without the range check]

As you can see, rangeCheck works by zeroing any contribution from outside the sampling radius.

The final step is to normalize the occlusion factor and invert it, in order to produce a value that can be used to directly scale the light contribution.
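
A one-line sketch, using the same assumed uniforms as above:

```glsl
// Normalize and invert: 1.0 = fully lit, 0.0 = fully occluded.
occlusion = 1.0 - (occlusion / float(uSampleKernelSize));
```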

  • The Blur Shader

The blur shader is very simple: all we want to do is average a 4×4 rectangle around each pixel to remove the 4×4 noise pattern:
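
A GLSL sketch of the blur pass (uTexInput is an assumed name for the noisy AO input; uTexelSize is explained in the text below):

```glsl
uniform sampler2D uTexInput; // the noisy AO result from the first pass
uniform vec2 uTexelSize;     // 1.0 / resolution of the AO render target

in vec2 vTexcoord;
out float fResult;

void main() {
    float result = 0.0;
    vec2 hlim = vec2(-2.0 + 0.5); // offset to the first texel centre of the 4x4 block
    for (int x = 0; x < 4; ++x) {
        for (int y = 0; y < 4; ++y) {
            vec2 offset = (hlim + vec2(float(x), float(y))) * uTexelSize;
            result += texture(uTexInput, vTexcoord + offset).r;
        }
    }
    fResult = result / 16.0; // average of the 4x4 block
}
```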

The only thing to note in this shader is uTexelSize, which allows us to accurately sample texel centres based on the resolution of the AO render target.

[Figure bgt_3_7: the final blurred ambient occlusion result]

  • CONCLUSION

The normal-oriented hemisphere method produces a more realistic-looking result than the basic Crysis method, without much extra cost, especially when implemented as part of a deferred renderer where the extra per-fragment data is readily available. It’s pretty scalable, too – the main performance bottleneck is the size of the sample kernel, so you can either go for fewer samples or use a lower resolution AO target.

A demo implementation is available here.

