Tag: Volume

SIGGRAPH 15 – The Real-time Volumetric Cloudscapes of Horizon: Zero Dawn


 
 

For slides with proper formatting and video/audio, use the PPTX version.

 
 

The following was presented at SIGGRAPH 2015 as part of the Advances in Real-Time Rendering course. http://advances.realtimerendering.com

 
 

Authors: Andrew Schneider – Principal FX Artist, Nathan Vos – Principal Tech Programmer

 
 


 
 

Thank you for coming.

 
 

Over the next half hour I am going to be breaking down and explaining the cloud system for Horizon Zero Dawn.


 
 

As Natasha mentioned, my background is in Animated film VFX, with experience programming for voxel systems including clouds.


 
 

This was co-developed between myself and a programmer named Nathan Vos. He could not be here today, but his work is an important part of what we were able to achieve with this.

 
 

Horizon was just announced at E3 this year, and this is the first time that we are sharing some of our new tech with the community. What you are seeing here renders in about 2 milliseconds, takes 20 MB of RAM and completely replaces our asset-based cloud solutions from previous games.

 
 

Before I dive into our approach and justification for those 2 milliseconds, let me give you a little background to explain why we ended up developing a procedural volumetric system for skies in the first place.


 
 

In the past, Guerrilla has been known for the KILLZONE series of games, which are first person shooters.

 
 


 
 

FPS usually restrict the player to a predefined track, which means that we could hand place elements like clouds using billboards and highly detailed sky domes to create a heavily art directed sky.


 
 

These domes and cards were built in Photoshop by one artist using stock photography. As time of day was static in the KILLZONE series, we could pre-bake our lighting to one set of images, which kept RAM usage and processing low.

 
 

By animating these dome shaders we could create some pretty detailed and epic skyscapes for our games.

 
 

Horizon is a very different kind of game…


 
 


 
 

Horizon trailer

 
 


 
 

So, from that you can see that we have left the world of Killzone behind.

 
 

•Horizon is a vastly open world where you can pretty much go anywhere that you see, including the tops of mountains.

•Since this is a living real world, we simulate the spinning of the earth by having a time-of-day cycle.

•Weather is part of the environment, so it will be changing and evolving as well.

•There's lots of epic scenery: mountains, forests, plains, and lakes.

•Skies are a big part of the landscape of Horizon. They make up half of the screen. Skies are also a very important part of storytelling as well as world building.

 
 


 
 

They are used to tell us where we are and when we are, and they can also be used as thematic devices in storytelling.

 
 


 
 

For Horizon, we want the player to really experience the world we are building. So we decided to try something bold, and we prioritized some goals for our clouds.

 
 

•Art direct-able

•Realistic, representing multiple cloud types

•Integrated with weather

•Evolving in some way

•And of course, they needed to be epic!

 
 


 
 

Realistic CG clouds are not an easy nut to crack. So, before we tried to solve the whole problem of creating a sky full of them, we thought it would be good to explore different ways to make and light individual cloud assets.

 
 


 
 

Our earliest successful modeling approach was to use a custom fluid solver to grow clouds. The results were nice, but this was hard for artists to control if they had no fluid simulation experience. Guerrilla is a game studio, after all.

 
 


 
 

We ended up modeling clouds from simple shapes,

voxelizing them,

and then running them through our fluid solver

until we got a cloud-like shape.

 
 


 
 

We then developed a lighting model that we used to pre-compute primary and secondary scattering.

•I'll get into our final lighting model a little later, but the result you see here is computed on the CPU in Houdini in 10 seconds.

 
 


 
 

We explored 3 ways to get these cloud assets into the game.

 
 

•For the first, we tried to treat our clouds as part of the landscape, literally modeling them as polygons from our fluid simulations and baking the lighting data using spherical harmonics. This only worked for the thick clouds and not the whispy ones…

 
 


 
 

So, we thought we should try to enhance the billboard approach to support multiple orientations and times of day. We succeeded, but we found that we couldn't easily reproduce inter-cloud shadowing. So…

 
 


 
 

•We tried rendering all of our voxel clouds as one cloud set to produce sky domes that could also blend into the atmosphere over depth. This sort of worked.

 
 

•At this point we took a step back to evaluate what didn't work. None of the solutions made the clouds evolve over time, there was no good way to make clouds pass overhead, and all methods suffered from high memory usage and overdraw.

 
 

•So maybe a traditional asset-based approach was not the way to go.

 
 


 
 

Well, what about voxel clouds?

OK, we are crazy; we are actually considering voxel clouds now…

As you can imagine, this idea was not very popular with the programmers.

 
 

Volumetrics are traditionally very expensive,

with lots of texture reads,

ray marches,

and nested loops.

 
 

However, there are many proven methods for fast, believable volumetric lighting,

and there is convincing work on using noise to model clouds; I can refer you to the 2012 Production Volume Rendering course.

Could we solve the expense somehow and benefit from all of the look advantages of volumetrics?

 
 


 
 

Our first test was to stack up a bunch of polygons in front of the camera and sample 3D Perlin noise with them. While extremely slow, this was promising; but we wanted to represent multiple cloud types, not just these bandy clouds.

 
 


 
 

So we went into Houdini and generated some tiling 3D textures out of the simulated cloud shapes. Using Houdini's GL extensions, we built a prototype GL shader to develop a cloud system and lighting model.

【About Houdini: https://zh.wikipedia.org/wiki/Houdini】

 
 


 
 

In the end, with a LOT of hacks, we got very close to mimicking our reference. However, it all fell apart when we put the clouds in motion, and it took 1 second per frame to compute. For me, coming from animated VFX, this was pretty impressive, but my colleagues were still not impressed.

 
 

So I thought: instead of explicitly defining clouds with pre-determined shapes, what if we could develop some good noises at lower resolutions that have the characteristics we like, and then find a way to blend between them based on a set of rules? There has been previous work like this, but none of it came close to our look goals.

 
 


 
 

This brings us to the cloud system for Horizon. To explain it better, I have broken it down into 4 sections: modeling, lighting, rendering and optimization.

 
 

Before I get into how we modeled the cloudscapes, it would be good to have a basic understanding of what clouds are and how they evolve into different shapes.

 
 


 
 

Classifying clouds helped us better communicate what we were talking about and define where we would draw them.

The basic cloud types are as follows.

•The strato clouds, including stratus, cumulus and stratocumulus, in the lowest layer

•The alto clouds, those bandy or puffy clouds above the strato layer

•And the cirro clouds, those big arcing bands and little puffs in the upper atmosphere

•Finally there is the granddaddy of all cloud types, the cumulonimbus, which rises high into the atmosphere

•For comparison, Mount Everest is above 8,000 meters.

 
 


 
 

After doing research on cloud types, we had a look into the forces that shape them. The best source we had was a book from 1961 by two meteorologists, called "The Clouds", as creatively titled as research books from the 60's tended to be. What it lacked in charm it made up for with useful empirical results and concepts that help with modeling a cloud system.

 
 

§Density increases at lower temperatures

§Temperature decreases over altitude

§High densities precipitate as rain or snow

§Wind direction varies over altitude

§Clouds rise with heat from the earth

§Dense regions form round shapes as they rise

§Light regions diffuse like fog

§Atmospheric turbulence further distorts clouds

These are all abstractions that are useful when modeling clouds.

 
 


 
 

Our modeling approach uses ray marching to produce clouds.

 
 

We march from the camera and sample noises and a set of gradients to define our cloud shapes, using a sampler.

 
 


 
 

In a ray march, you use a sampler to…

build up an alpha channel…

and calculate lighting.

 
 


 
 

There are many examples of real-time volumetric clouds on the internet. The usual approach involves drawing them in a height zone above the camera using something called fBm, fractal Brownian motion. This is done by layering Perlin noises of different frequencies until you get something detailed.

 
 

(pause)

 
 

This noise is then usually combined somehow with a gradient to define a change in cloud density over height.

 
 


 
 

This makes some very nice, but very procedural-looking, clouds.

What's wrong?

There are no larger governing shapes or visual cues as to what is actually going on here. We don't feel the implied evolution of the clouds from their shapes.

 
 


 
 

By contrast, in this photograph we can tell what is going on: these clouds are rising like puffs of steam from a factory. Notice the round shapes at the tops and whispy shapes at the bottoms.

 
 


 
 

This fBm approach has some nice whispy shapes, but it lacks the bulges and billows that give a sense of motion. We need to take our shader beyond what you would find on something like Shadertoy.

 
 


 
 

These billows, as I'll call them,

are packed, sometimes taking on a cauliflower shape.

Since Perlin noise alone doesn't cut it, we developed our own layered noises.

 
 


 
 

Worley noise was introduced in 1996 by Steven Worley and is often used for caustics and water effects. If it is inverted, as you see here,

it makes tightly packed billow shapes.

We layered it like the standard Perlin fBm approach.

 
 

Then we used it as an offset to dilate Perlin noise. This allowed us to keep the connectedness of Perlin noise but add some billowy shapes to it.

We referred to this as Perlin-Worley noise.
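A minimal C++ sketch of that combination, assuming perlin and worley are tileable noise values in [0, 1] fetched elsewhere. The transcript does not give Guerrilla's exact weighting, so the remap pattern below is illustrative rather than their shader:

```cpp
#include <algorithm>

// Remap x from [a, b] into [c, d], clamped.
float remap(float x, float a, float b, float c, float d)
{
    float t = std::clamp((x - a) / (b - a), 0.0f, 1.0f);
    return c + (d - c) * t;
}

// Perlin-Worley: inverted Worley (tightly packed billows) is used as an
// offset that dilates Perlin noise, keeping Perlin's connectedness while
// imprinting billowy shapes. One plausible reading of the slide's text.
float perlinWorley(float perlin, float worley)
{
    float billows = 1.0f - worley;                   // inverted Worley
    return remap(perlin, billows, 1.0f, 0.0f, 1.0f); // dilate Perlin by the billows
}
```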

 
 


 
 

In games, it is often best for performance to store noises as tiling 3D textures.

 
 

You want to keep texture reads to a minimum

and keep the resolutions as small as possible.

In our case we have compressed our noises into

two 3D textures

and one 2D texture.

 
 


 
 

The first 3D texture…

has 4 channels…

at 128³ resolution…

The first channel is the Perlin-Worley noise I just described. The other 3 are Worley noise at increasing frequencies. As in the standard approach, this 3D texture is used to define the base shape of our clouds.

 
 


 
 

Our second 3D texture…

has 3 channels…

at 32³ resolution…

and uses Worley noise at increasing frequencies. This texture is used to add detail to the base cloud shape defined by the first 3D noise.

 
 


 
 

Our 2D texture…

has 3 channels…

at 128² resolution…

and uses curl noise, which is non-divergent and is used to fake fluid motion. We use this noise to distort our cloud shapes and add a sense of turbulence.

 
 


 
 

Recall that the standard solution calls for a height gradient to change the noise signal over altitude. Instead of one, we use…

three mathematical presets that represent the major low-altitude

cloud types, and we blend between them at the sample position.

We also have a value telling us how much cloud coverage we want at the sample position. This is a value between zero and one.

 
 


 
 

What we are looking at on the right side of the screen is a view rotated about 30 degrees above the horizon. We will be drawing clouds per the standard approach, in a zone above the camera.

 
 

First, we build a basic cloud shape by sampling our first 3D texture and multiplying it by our height signal.

 
 

The next step is to multiply the result by the coverage and reduce density at the bottoms of the clouds.

 
 


 
 

This ensures that the bottoms will be whispy, and it increases the presence of clouds in a more natural way. Remember that density increases over altitude. Now that we have our base cloud shape, we add details.

 
 


 
 

The next step is to…

erode the base cloud shape by subtracting the second 3D texture at the edges of the cloud.

A little tip: if you invert the Worley noise at the base of the clouds, you get some nice whispy shapes.

 
 

We also distort this second noise texture with our 2D curl noise to fake the swirly distortions from atmospheric turbulence, as you can see here…
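Pulling the last few steps together, a hedged C++ sketch of the sampler logic. remap is as in the earlier sketch, all texture fetches are passed in as plain floats, and every constant is illustrative rather than Guerrilla's tuned value:

```cpp
#include <algorithm>

float remap(float x, float a, float b, float c, float d); // from the earlier sketch

// Two-level cloud density sampler. heightFrac is the sample's 0-1 height
// within the cloud layer; coverage comes from the weather system.
float sampleCloudDensity(float baseNoise,   // low-frequency Perlin-Worley fetch
                         float detailNoise, // high-frequency Worley fetch, curl-distorted
                         float heightGrad,  // cloud-type preset evaluated at heightFrac
                         float heightFrac,
                         float coverage)
{
    // 1. Base shape: low-frequency noise shaped by the height-gradient preset.
    float base = baseNoise * heightGrad;

    // 2. Coverage: carve the base shape so clouds only appear where coverage
    //    allows, then scale by coverage so bottoms stay soft.
    base = remap(base, 1.0f - coverage, 1.0f, 0.0f, 1.0f) * coverage;

    // 3. Detail erosion: invert the Worley detail near the cloud base for
    //    whispy bottoms and billowy tops, then erode the base shape's edges.
    float t       = std::clamp(heightFrac * 10.0f, 0.0f, 1.0f);
    float detail  = (1.0f - t) * (1.0f - detailNoise) + t * detailNoise;
    float density = remap(base, detail * 0.2f, 1.0f, 0.0f, 1.0f);

    return std::max(density, 0.0f);
}
```

In the real shader these inputs would come from the two 3D textures, the 2D curl texture and the weather map described above.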

 
 


 
 

Here's what it looks like in game. I'm adjusting the coverage signal to make the clouds thicker, and then transitioning between the height gradients from cumulus to stratus.

 
 

Now that we have decent stationary clouds, we need to make them evolve as part of our weather system.

 
 


 
 

These two controls, cloud coverage and cloud type, are a function of our weather system.

 
 

There is an additional control for precipitation, which we use to draw rain clouds.

 
 


 
 

In this image you can see a little map down in the lower left corner. This represents the weather settings that drive the clouds over our section of the world map. The pinkish-white pattern is the output of our weather system: red is coverage, green is precipitation and blue is cloud type.

 
 

The weather system modulates these channels with a simulation that progresses during gameplay. The image here has cumulus rain clouds directly overhead (white) and regular cumulus clouds in the distance. We have controls to bias the simulation to keep things art-directable in a general sense.

 
 


 
 

The default condition is a combination of cumulus and stratus clouds. The areas that are more red have less of the blue signal, making them stratus clouds; you can see them in the distance at the bottom centre of the image.

 
 


 
 

The precipitation signal transitions the map from whatever it is to cumulonimbus clouds at 70% coverage.

 
 


 
 

The precipitation control not only adjusts clouds, it also creates rain effects. In this video I am gradually increasing the chance of precipitation to 100%.

 
 


 
 

If we increase the wind speed and make sure that there is a chance of rain, we get storm clouds rolling in and starting to drop rain on us. This video is sped up for effect, by the way. Ahhh… nature sounds.

 
 


 
 

We also use our weather system to make sure that clouds on the horizon are always interesting and poke above mountains.

 
 

We draw the cloudscapes within a 35,000 meter radius around the player,

and starting at a distance of 15,000 meters,

we transition to cumulus clouds at around 50% coverage.

 
 


 
 

This ensures that there is always some variety and 'epicness' to the clouds on the horizon.

So, as you can see, the weather system produces some nice variation in cloud type and coverage.

 
 


 
 

In the case of the E3 trailer, we overrode the signals from the weather system with custom textures. You can see the corresponding texture for each shot in the lower left corner. We painted custom skies for each shot in this manner.

 
 


 
 

So, to sum up our modeling approach…

 
 

We follow the standard ray-march/sampler framework, but we build the clouds with two levels of detail:

a low-frequency cloud base shape, and high-frequency detail and distortion.

Our noises are custom, made from Perlin, Worley and curl noise.

We use a set of presets for each cloud type to control density over height and cloud coverage.

These are driven by our weather simulation, or by custom textures for use with cut scenes, and it is all animated in a given wind direction.

 
 


 
 

Cloud lighting is a very well researched area in computer graphics. The best results tend to come from high numbers of samples. In games, when you ask what the budget will be for lighting clouds, you might very well be told "zero". We decided to examine the current approximation techniques to reproduce the 3 most important lighting effects for us:

 
 


 
 

the directional scattering, or luminous quality, of clouds…

the silver lining when you look toward the sun through a cloud…

and the dark edges visible on clouds when you look away from the sun.

 
 

The first two have standard solutions, but the third is something we had to solve ourselves.

 
 


 
 

When light enters a cloud,

the majority of the light rays spend their time refracting off the water droplets and ice inside the cloud before heading to our eyes.

 
 

(pause)

By the time a light ray finally exits the cloud, it could have been out-scattered, absorbed by the cloud, or combined with other light rays in what is called in-scattering.

 
 

In film VFX we can afford to spend time gathering light and accurately reproducing this, but in games we have to use approximations. These three behaviors can be thought of as probabilities, and there is a standard way to approximate the result you would get.

 
 


 
 

Beer's law states that we can determine the amount of light reaching a point based on the optical thickness of the medium it travels through. With Beer's law, we have a basic way to describe the amount of light at a given point in the cloud.

If we substitute energy for transmittance and depth in the cloud for thickness, and draw this out, you can see that energy decreases exponentially over depth. This forms the foundation of our lighting model.
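In symbols, with $d$ the optical depth the ray has traveled through the medium and $\sigma_t$ the extinction, Beer's law gives the surviving energy; the talk's plots effectively use the unit-extinction form:

$$E = T_r(d) = e^{-\sigma_t\, d} \qquad (\text{plotted as } E = e^{-d})$$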

 
 


 
 

But there is another component contributing to the light energy at a point: the probability of light scattering forward or backward in the cloud. This is responsible for the silver lining in clouds, one of our look goals.

 
 


 
 

In clouds, there is a higher probability of light scattering forward. This is called anisotropic scattering.

 
 

In 1941, the Henyey-Greenstein model was developed to help astronomers with light calculations at galactic scales; today it is used to reliably reproduce anisotropy in cloud lighting.

 
 


 
 

Each time we sample light energy, we multiply it by the Henyey-Greenstein phase function.
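For reference, the standard Henyey-Greenstein phase function, with $\theta$ the angle between the view and light directions and $g$ the eccentricity ($g > 0$ biases scattering forward):

$$p_{HG}(\theta, g) = \frac{1}{4\pi}\cdot\frac{1 - g^2}{\left(1 + g^2 - 2g\cos\theta\right)^{3/2}}$$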

 
 


 
 

Here you can see the result. On the left is just the Beer's law portion of our lighting model; on the right we have applied the Henyey-Greenstein phase function. Notice that the clouds are brighter around the sun on the right.

 
 


 
 

But we were still missing something important, something that is often forgotten: the dark edges on clouds. Solutions for this are not well documented, so we had to do a thought experiment to understand what was going on.

 
 


 
 

Think back to the random walk of a light ray through a cloud.

 
 

If we compare a point inside the cloud to one near the surface, the one inside receives more in-scattered light. In other words, cloud material, if you want to call it that, is a collector of light: the deeper you are beneath the surface of a cloud, the more potential there is for gathered light from nearby regions, until the light begins to attenuate, that is.

 
 

This is extremely pronounced in round formations on clouds, so much so that the crevices appear…

to be lighter than the bulges and edges, because they receive a small boost of in-scattered light.

Normally in film, we would take many, many samples to gather the contributing light at a point, and use a more expensive phase function; you can get this result with brute force. If you were in Magnus Wrenninge's multiple scattering talk yesterday, there was a very good example of how to get this. But in games we have to find a way to approximate it.

 
 


 
 

A former colleague of mine at Blue Sky, Matt Wilson, said that there is a similar effect in piles of powdered sugar. So, I'll refer to this as the powdered sugar look.

 
 


 
 

Once you understand this effect, you begin to see it everywhere. It cannot be un-seen.

Even in light whispy clouds; the dark gradient is just wider.

 
 


 
 

The reason we do not get this effect automatically is that our transmittance function is an approximation and doesn't take it into account.

 
 

The surface of the cloud always receives the same light energy. Let's think of this effect as a statistical probability based on depth.

 
 


 
 

As we go deeper into the cloud, our potential for in-scattering increases, and more of it reaches our eye.

 
 

If you combine the two functions, you get something that describes this effect as well as the traditional approach.

I am still looking for the "Beer's-Powder" approximation method in the ACM digital library, and I haven't found anything mentioned under that name yet.
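Written out, the powder term and its combination with Beer's law look like this ($d$ is depth toward the light; the factor of 2 renormalizes the peak of the product, matching the curves plotted in the deck):

$$\mathrm{powder}(d) = 1 - e^{-2d}, \qquad E(d) = 2\, e^{-d}\left(1 - e^{-2d}\right)$$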

 
 


 
 

Let's visually compare the components of our directional lighting model:

the Beer's law component, which handles the primary scattering;

the powder sugar effect, which produces the dark edges facing the light;

and their combination in our final result.

 
 


 
 

Here you can see what Beer's law alone, and the combined Beer's law and powder effect, look like when viewed from the light source. This is a pretty good approximation of our reference.

 
 


 
 

In game, it adds a lot of realism to the thicker clouds and helps sell the scale of the scene.

 
 


 
 

But we have to remember that this is a view-dependent effect: we only see it where our view vector approaches the light vector, so the powder function should account for this gradient as well.

 
 


 
 

Here is a panning camera view that shows this effect increasing as we look away from the sun.

 
 


 
 

The last part of our lighting model: we artificially darken the rain clouds by increasing the light absorption where they exist.

 
 


 
 

So, in review, our lighting model has four components:

Beer's law,

Henyey-Greenstein,

our powder sugar effect,

and increased absorption for rain clouds.

 
 

 
 


 
 

I have outlined how our sampler is used to model clouds and how our lighting algorithm simulates the associated lighting effects. Now I am going to describe how and where we take samples to build an image, and how we integrate our clouds into the atmosphere and our time-of-day cycle.

 
 


 
 

The first part of rendering with a ray march is deciding where to start. In our situation, Horizon takes place on Earth, and as most of you are aware… the Earth… is round.

The gases that make up our atmosphere wrap around the earth, and clouds exist in different layers of that atmosphere.

 
 


 
 

When you are on a "flat" surface such as the ocean, you can clearly see how the curvature of the earth causes clouds to descend into the horizon.

 
 


 
 

For the purposes of our game, we divide the clouds into two types in this spherical atmosphere:

•the low-altitude volumetric strato-class clouds between 1,500 and 4,000 meters,

•and the high-altitude 2D alto- and cirro-class clouds above 4,000 meters. The upper-level clouds are not very thick, so this is a good place to reduce the expense of the shader by making them scrolling textures instead of multiple samples in the ray march.

 
 


 
 

By ray marching through a spherical atmosphere, we can

ensure that clouds properly descend into the horizon.

It also means we can force the scale of the scene by shrinking the radius of the atmosphere.

 
 


 
 

We do not want to do any work, especially expensive work, where we don't need to. So instead of taking full samples at every point along the ray, we use our sampler's two levels of detail as a way to do cheaper work until we actually hit a cloud.

 
 


 
 

Recall that the sampler has a low-detail noise that makes a basic cloud shape,

and a high-detail noise that adds the realistic detail we need.

The high-detail noise is always applied as an erosion from the edge of the base cloud shape.

 
 


 
 

This means that we only need to do the high-detail noise, and all of its associated instructions, where the low-detail sample returns a non-zero result.

This has the effect of producing an isosurface that surrounds the area where our cloud could be.

 
 


 
 

So, when we take samples through the atmosphere, we take these cheaper samples at a larger step size until we hit a cloud isosurface. Then we switch to full samples, with the high-detail noise and all of its associated instructions. To make sure that we do not miss any high-resolution samples, we always take a step backward before switching to high-detail samples.
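A hedged C++ sketch of this cheap/full march. The step sizes, the zero-run threshold and the alpha accumulation are illustrative; density(t, cheap) stands in for the sampler, where cheap == true skips the high-detail erosion work:

```cpp
#include <algorithm>
#include <functional>

float marchAlpha(const std::function<float(float, bool)>& density,
                 float start, float end, float cheapStep)
{
    float t = start, alpha = 0.0f;
    bool  cheap = true;
    int   zeroRun = 0;
    while (t < end && alpha < 1.0f) {          // exit early once alpha saturates
        float d = density(t, cheap);
        if (cheap) {
            if (d > 0.0f) {                                // hit the isosurface:
                t = std::max(t - cheapStep, start);        // step back once,
                cheap = false;                             // then go full detail
                continue;
            }
            t += cheapStep;                                // big steps through empty air
        } else {
            zeroRun = (d <= 0.0f) ? zeroRun + 1 : 0;
            if (zeroRun >= 6) cheap = true;                // left the cloud: cheap again
            alpha += d * 0.1f * (1.0f - alpha);            // illustrative accumulation
            t += cheapStep * 0.5f;                         // full samples, finer step
        }
    }
    return alpha;
}
```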

 
 


 
 

Once the alpha of the image reaches 1, we don't need to keep sampling, so we stop the march early.

 
 


 
 

If we don't reach an alpha of one, we have another optimization:

after several consecutive samples that return zero density, we switch back to the cheap march behavior until we hit something again or reach the top of the cloud layer.

 
 


 
 

Because the ray length increases as we look toward the horizon, we start with

a potential 64 samples and end with a potential 128 at the horizon. I say "potential" because the optimizations can cause the march to exit early, and we really hope they do.

This is how we take the samples that build up the alpha channel of our image. To calculate light intensity, we need to take more samples.

 
 


 
 

Normally, in a ray march like this, you take samples toward the light source, plug the sum into your lighting equation, and then attenuate the result using the alpha channel until you (hopefully) exit the march early because your alpha has reached 1.

 
 


 
 

In our approach, we take 6 samples in a cone toward the sun. This smooths the banding we would normally get with 6 samples, and weights our lighting function with neighboring density values, which creates a nice ambient effect. The last sample is placed far away from the rest in order to capture shadows cast by distant clouds.
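A minimal sketch of that light march, assuming densityTowardSun(t) returns cloud density at distance t along the light direction. The cone jitter and the distant-sample placement are illustrative:

```cpp
#include <functional>

// 5 near samples stepping toward the sun (jittered inside a widening cone in
// the real shader), plus one distant sample to catch far-away cloud shadows.
float sunOpticalDepth(const std::function<float(float)>& densityTowardSun,
                      float stepSize)
{
    float depth = 0.0f;
    for (int i = 1; i <= 5; ++i)                                 // cone samples
        depth += densityTowardSun(i * stepSize) * stepSize;
    depth += densityTowardSun(18.0f * stepSize) * stepSize;      // long-distance sample
    return depth;   // plugs in as d in the Beer's law term
}
```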

 
 


 
 

Here you can see what our clouds look like with just the alpha samples, with our 5 cone samples for lighting, and with the long-distance cone sample.

To improve the performance of these light samples, we switch to the cheap version of our shader once the alpha of the image reaches 0.3; this made the shader 2x faster.

 
 


 
 

The summed lighting samples replace the lower-case d, or depth, in the Beer's law portion of our lighting model. This energy value is then attenuated by the depth of the sample in the cloud to produce the image, as per the standard volumetric ray-marching approach.

 
 


 
 

The last step of our ray march is to sample the 2D cloud textures for the high-altitude clouds.

 
 


 
 

These are a collection of various types of cirrus and alto clouds, tiling and scrolling at different speeds and directions above the volumetric clouds.

 
 


 
 

In reality, light rays of different frequencies mix in a cloud, producing very beautiful color effects. Since we live in a world of approximations, we had to base cloud colors on some logical assumptions.

We color our clouds based on the following model:

 
 

Ambient sky contribution increases over height.

Direct lighting is dominated by the sun color.

The atmosphere occludes clouds over depth.

We add up our ambient and direct components and attenuate toward the atmosphere color based on the depth channel.

 
 


 
 

Now you can change the time of day in the game, and the lighting and colors update automatically. This means no pre-baking, and our unique memory usage for the entire sky is limited to the cost of two 3D textures and one 2D texture, instead of dozens of billboards or sky domes.

 
 


 
 

To sum up what makes our rendering approach unique:

The sampler does "cheap" work unless we are potentially inside a cloud.

64-128 potential march samples, with 6 light samples per march in a cone when we are potentially in a cloud.

Light samples switch from full to cheap at a certain depth.

 
 

 
 


 
 

The approach that I have described so far costs around 20 milliseconds.

(pause for laughter)

Which means it is pretty, but it is not fast enough to be included in our game. My co-developer and mentor on this, Nathan Vos, had the idea that…

 
 


 
 

every frame, we could use a quarter-res buffer to update 1 out of 16 pixels in each 4×4 pixel block of our final image.

We reproject the previous frame to ensure we have something persistent.

 
 


 
 

…and where we could not reproject, like at the edge of the screen, we substitute the result from one of the low-res buffers.

Nathan's idea made the shader 10x faster or more when we render at half res and use filters to upscale.

It is pretty much the whole reason we are able to put this in our game. With this, our target performance is around 2 milliseconds, most of it coming from the number of instructions.
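As a sketch, the interleaved update can be as simple as picking one slot of every 4×4 block per frame; the ordering below is illustrative, not Guerrilla's actual sequence:

```cpp
// Returns true if this pixel is the one ray-marched this frame; the other 15
// in its 4x4 block are reprojected from the previous frame (falling back to a
// low-res buffer where reprojection fails, e.g. at screen edges).
bool rayMarchThisFrame(int px, int py, int frame)
{
    static const int order[16] = {  0,  8,  2, 10,
                                   12,  4, 14,  6,
                                    3, 11,  1,  9,
                                   15,  7, 13,  5 };   // Bayer-like visit pattern
    int slot = (py & 3) * 4 + (px & 3);
    return order[slot] == (frame & 15);
}
```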

 
 


 
 

In review, we feel that we have largely achieved our initial goals. This is still a work in progress; there is time left in the production cycle, so we hope to improve performance and direct-ability a bit more. We're also still working on our atmospheric model and weather system, and we will share more about this work in the future on our website and at future conferences.

 
 

All of this was captured on a PlayStation 4,

and the solution was written in PSSL and C++.

 
 


 
 

A number of sources were utilized in the development of this system; I have listed them here.

I would like to thank my co-developer, Nathan Vos, most of all.

Also some other Guerrillas:

Elco – weather system and general help with the transition to games

Michal – supervising the shader development with me and Nathan

Jan Bart – for keeping us on target with our look goals

Marijn – for allowing me the time in the FX budget to work on this, and for his guidance

Maarten van der Gaag – for some optimization ideas

Felix van den Bergh – for slaving away at making polygon clouds and voxel clouds in the early days

Vlad Lapotin – for his work testing out spherical harmonics

And Hermen Hulst, manager of Guerrilla, for hiring me and for allowing us the resources and time to properly solve this problem for real-time.

 
 


 
 

Are there any questions?

 
 


 
 

Peace out.

SIGGRAPH 15 – Physically Based and Unified Volumetric Rendering in Frostbite

Author:

Sebastien Hillaire – Electronic Arts / frostbite

sebastien.hillaire@frostbite.com

https://twitter.com/SebHillaire

 
 

 
 

  • Introduction

 
 

Physically based rendering in Frostbite



 
 

Volumetric rendering in Frostbite was limited

  • Global distance/height fog
  • Screen space light shafts
  • Particles



 
 

 
 

Real-life volumetrics

【What we want are these real-world effects: clouds and atmosphere, fog, light scattering, and so on.】

 
 

 
 

  • Related Work

 
 

Billboards

 
 

Analytic fog [Wenzel07]

Analytic light scattering [Miles]

Characteristics: fast, but not shadowed, and limited to homogeneous media.

http://blog.mmacklin.com/2010/05/29/in-scattering-demo/

http://research.microsoft.com/en-us/um/people/johnsny/papers/fogshop-pg.pdf

http://indus3.org/atmospheric-effects-in-games/

 
 


 
 

Screen space light shafts

  • Post process [Mitchell07]
  • Epipolar sampling [Engelhardt10]

Characteristics:

  • High quality
  • Sun/sky needs to be visible on screen
  • Only homogeneous media
  • Can go for epipolar sampling, but this won't save the day

 
 


 
 

Splatting

  • Light volumes
    • [Valliant14][Glatzel14][Hillaire14]
  • Emissive volumes [Lagarde13]

This can result in high quality scattering, but it usually does not match the participating media of the scene; each volume is handled in isolation.


 
 


 
 

 
 

Volumetric fog [Wronski14]

  • Sun and local lights
  • Heterogeneous media

Allows spatially varying participating media and local lights to scatter; this is the closest approach to what we want.

However, it did not seem really physically based at the time, and some features we wanted were missing.
 
 


 
 

 
 

  • Scope and motivation

 
 

Increase visual quality and give more freedom to art direction!

 
 

Physically based volumetric rendering

  • Meaningful material parameters
  • Decouple material from lighting
  • Coherent results

We want it to be physically based: participating media materials are decoupled from the light sources (e.g. no scattering colour on the light entities), and media parameters form a meaningful set. With this we should get more coherent results that are easier to control and understand.

 
 

Unified volumetric interactions

  • Lighting + regular and volumetric shadows
  • Interaction with opaque, transparent and particles

Also, because several kinds of entities interact with volumetrics in Frostbite (fog, particles, opaque & transparent surfaces, etc.), we want to unify how we deal with them, so we don't need X methods for X types of interaction.

 
 


 
 

This video gives you an overview of what we got from this work: lights that generate scattering according to the participating media, volumetric shadows, local fog volumes, etc.

And I will show you now how we achieved it.

【Results first; see the video in the slides.】


 
 

 
 

 
 

  • Volumetric rendering

 
 

  • Single Scattering

 
 

As of today we restrict ourselves to single scattering when rendering volumetrics. This is already challenging to get right.

 
 

When light interacts with a surface, it is possible to evaluate the amount of light bounced toward the camera by evaluating, for example, a BRDF. But in the presence of participating media, things get more complex.

 
 

  1. You have to take into account transmittance as the light travels through the media
  2. Then you need to integrate the scattered light along the view ray by taking many samples
  3. For each of these samples, you also need to take into account transmittance to the view point
  4. You also need to integrate the scattered light at each position
  5. And take into account the phase function, the regular shadow map (opaque occluders) and the volumetric shadow map (participating media and other volumetric entities)

 
 

 
 


 
 


 
 

【The two integrals in the slide's equation correspond to the scattering integrations described in steps 2 and 4 above; the summation corresponds to the samples taken along the view ray.】
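The equation itself is an image in the original deck and does not survive here. In the notation of the five steps above, it has this general shape, where $V$ bundles the regular and volumetric shadow visibility; in practice the inner integral collapses to a sum over light sources and the outer one to the per-sample sum mentioned in the note (a reconstruction, not the slide verbatim):

$$L(x,\omega_o) = \int_0^{d} T_r(x, x_t)\,\sigma_s(x_t)\int_{\Omega} p(\omega_o,\omega_i)\, V(x_t,\omega_i)\, L_i(x_t,\omega_i)\,\mathrm{d}\omega_i\,\mathrm{d}t$$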

 
 

  • Clip Space Volumes

 
 

Frustum aligned 3D textures [Wronski14]

  • Frustum voxel in world space => froxel

As in Wronski, all our volumes are 3D textures that are clip space aligned (such voxels become froxels in world space; credit Alex Evans and Sony ATG, see 'Learning from Failure: a Survey of Promising, Unconventional and Mostly Abandoned Renderers for Dreams PS4, a Geometrically Dense, Painterly UGC Game', Advances in Real-Time Rendering course, SIGGRAPH 2015).

 
 

Note: Frostbite uses tile-based deferred lighting

  • 16×16 tiles with culled light lists

 
 

Align volume tiles on light tiles

  • Reuse per tile culled light list
  • Volume tiles can be smaller (8×8, 4×4, etc.)
  • Careful correction for resolution integer division

 
 

This volume is also aligned with our screen light tiles. This is because we are reusing the forward light tile list culling result to accelerate the scattered light evaluation (remember, Frostbite is a tile based deferred lighting engine).

 
 

Our volume tiles in screen space can be smaller than the light tiles (which are 16×16 pixels).

 
 

By default we use

Depth resolution of 64

8×8 volume tiles

 
 

720p requires 160×90×64 (921,600 froxels × 8 bytes ≈ 7 MB per RGBA16F texture)

1080p requires 240×135×64 (~15 MB per RGBA16F texture)

 
 


 
 

 
 

  • Data flow

 
 


 
 

This is an overview of our data flow.

We use clip space volumes to store the data at the different stages of our pipeline.

We have material properties, which are first voxelized from the participating media entities.

Then, using the light sources of our scene and this material property volume, we generate scattered light data per froxel. This data can be temporally upsampled to increase quality. Finally, an integration step prepares the data for rendering.

  1. Participating media material definition (first stage of the data flow)

 
 

Follow the theory [PBR]

  • Absorption 𝝈𝒂 (m^-1)

Absorption describes the amount of light absorbed by the media over a certain path length.

  • Scattering 𝝈𝒔 (m^-1)

Scattering describes the amount of light scattered over a certain path length.

  • Phase 𝒈

A single-lobe phase function describes how light bounces off particles (uniformly, forward scattering, etc.). It is based on Henyey-Greenstein (you can use the Schlick approximation).

  • Emissive 𝝈𝒆 (irradiance · m^-1)

Emissive describes emitted light.

  • Extinction 𝝈𝒕 = 𝝈𝒔 + 𝝈𝒂
  • Albedo 𝛒 = 𝝈𝒔 / 𝝈𝒕

Artists can author {absorption, scattering} or {albedo, extinction}

  • Train your artists! Important for them to understand their meaning!

As with every physically based component, it is very important for artists to understand these parameters, so take the time to educate them.

 
 


 
 

Participating media (PM) sources

  • Depth fog
  • Height fog
  • Local fog volumes
    • With or without density textures

Depth/height fog and local fog volumes are entities that can be voxelized. You can see here local fog volumes as plain, or with varying density according to a density texture.

 
 

Voxelize PM properties into a V-Buffer

  • Add scattering, emissive and extinction
  • Average phase g (no multi-lobe)
  • Wavelength independent 𝝈𝒕 (for now)

We voxelize them into a V-Buffer, analogous to the screen G-Buffer but in volume (clip space). We basically add all the material parameters together, since they are linear, except the phase function, which is averaged. We only consider a single lobe for now, from the HG phase function.

We have deliberately chosen wavelength independent extinction to keep the volumes cheap (material, lighting, shadows), but it would be easy to extend if necessary at some point.

Supporting emissive lets artists place local fog volumes that emit light as scattering would, without a matching local light. This can be used for cheap ambient lighting.

 
 

 
 


 
 

V-Buffer (per-froxel data)

  Content                                                   Format
  Scattering R | Scattering G | Scattering B | Extinction   RGBA16F
  Emissive R   | Emissive G   | Emissive B   | Phase (g)    RGBA16F

 
 

 
 

  2. Froxel light integration (second stage of the data flow)

 
 

Per froxel:

  • Sample PM properties data
  • Evaluate
    • Scattered light 𝑳𝒔𝒄𝒂𝒕(𝒙𝒕,𝝎𝒐)
    • Extinction

For each froxel, one thread is in charge of gathering scattered light and extinction.

Extinction is simply copied over from the material. You will see later why this is important for visual quality in the final stage (using extinction instead of transmittance gives energy-conservative scattering). Extinction is also linear, so it is better to temporally integrate it than the non-linear transmittance value.

 
 

Scattered light:

  • 1 sample per froxel
  • Integrate all light sources: indirect light + sun + local lights

 
 


 
 

Sun/Ambient/Emissive

 
 

Indirect light on local fog volumes

  • From a Frostbite diffuse SH light probe
    • 1 probe at the volume centre
    • Integrated w.r.t. the phase function as an SH cosine lobe [Wronski14]

 
 

Then we integrate the scattered light. One sample per froxel.

 
 

We first integrate ambient the same way as Wronski. Frostbite allows us to sample diffuse SH light probes. We use one per local fog volume positioned at their centre.

 
 

We also integrate the sun light according to our cascaded shadow maps. We could use exponential shadow maps, but we do not, as our temporal up-sampling is enough to soften the result.

 
 

You can easily notice the heterogeneous nature of the local fog shown here.

 
 


 
 

Local lights

  • Reuse tiled-lighting code
  • Use forward tile light list post-culling
  • No scattering? skip local lights

 
 

We also integrate local lights, re-using the tile culling result to only take into account lights visible within each tile.

One good optimisation is to skip it all if no scattering is possible according to your material properties.

 
 

Shadows

  • Regular shadow maps
  • Volumetric shadow maps

 
 

Each of these lights can also sample their associated shadow maps. We support regular shadow maps and also volumetric shadow maps (described later).

 
 


 
 

  3. Temporal volumetric integration (second stage of the data flow, continued)

 
 

Problems:

One scattering/extinction sample per froxel per frame means:

  • Under-sampling with very strong material
  • Aliasing under camera motion
  • Shadows make it worse

 
 

As I said, we are only using a single sample per froxel.

 
 

Aliasing (see the two videos in the slides)

This can unfortunately result in very strong aliasing for very thick participating media, and when integrating the local light contribution.

 
 


 
 

You can also notice it in the video, as well as very strong aliasing of the shadow coming from the tree.

 
 


 
 

Solution: temporal integration

To mitigate these issues, we temporally integrate each frame's result with that of the previous frame (well known; also used by Karis last year for TAA).

 
 

To achieve this:

We jitter our samples per frame, uniformly along the view ray.

The material and scattered light samples are jittered using the same offset (to soften the evaluated material and scattered light).

Each frame is integrated according to an exponential moving average.

And we ignore the previous result when no history sample is available (out of the previous frustum).

 
 

Jittered samples (Halton)

Same offset for all samples along view ray

Jitter scattering AND material samples in sync

 
 

Re-project previous scattering/extinction

Blend 5% of the current frame with 95% of the previous: an exponential moving average [Karis14]

Out of frustum: skip history
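A minimal C++ sketch of that blend per froxel, monochrome for brevity; reprojection through the previous view-projection and the history-validity test are assumed to be done by the caller:

```cpp
struct FroxelSample { float scattering; float extinction; };

FroxelSample temporalIntegrate(FroxelSample current, FroxelSample history,
                               bool historyValid)
{
    if (!historyValid)            // out of the previous frustum: skip history
        return current;
    const float blend = 0.05f;    // 5% current, 95% history (exponential moving average)
    FroxelSample out;
    out.scattering = history.scattering + (current.scattering - history.scattering) * blend;
    out.extinction = history.extinction + (current.extinction - history.extinction) * blend;
    return out;
}
```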

 
 


 
 

【The improvement is obvious; see the video in the slides.】

This is great and promising, but several issues remain:

 
 

Local fog volumes and lights will leave trails when moving.

One could use local fog volume motion stored in a buffer, the same way we do in screen space for motion blur.

But what do we do when two volumes intersect? This is the same problem as deep compositing.

For lighting, we could use neighbour colour clamping, but this will not solve the problem entirely.

This is an exciting and challenging R&D area for the future, and I'll be happy to discuss it with you if you have some ideas.

 
 

  4. Final integration

 
 

Integrate froxel {scattering, extinction} along the view ray

  • Solves {𝑳𝒊(𝒙,𝝎𝒐), 𝑻𝒓(𝒙,𝒙𝒔)} for each froxel at position 𝒙𝒔

 
 

We basically accumulate scattering from near to far, weighted by transmittance. This solves for the integrated scattered light and transmittance along the view ray, for each froxel.

 
 

One could use the code sample shown on the slide: accumulate scattering, then transmittance for the next froxel, slice by slice. However, that is completely wrong: there is an order dependency on the accumulated transmittance value (accumScatteringTransmitance.a). Should we update transmittance or scattering first?

 
 


 
 

Final

 
 

Non energy-conserving integration:

 
 

You can see here multiple volumes with increasing scattering properties. It is easy to understand that integrating scattering and then transmittance is not energy conservative.

 
 


 
 

We could reverse the order of operations. You can see that we somewhat get back the correct albedo one would expect, but it is overall too dark, and temporally integrating that definitely does not help here.

 
 


 
 

So how to improve this? We know we have one light and one extinction sample.

 
 

We can keep the light sample: it is expensive to evaluate, and it is good enough to assume it constant along the view ray within each depth slice.

 
 

But the single transmittance sample is completely wrong. Within a slice of width d, transmittance should in fact be 1 at the near interface of the depth slice and exp(−𝝈𝒕 d) at the far interface.

 
 

What we do to solve this is integrate the scattered light analytically, weighting by the transmittance at each point along the view-ray range within the slice. The analytical integration of constant scattered light over a definite range, with one extinction sample, reduces to the equation below.

Using this, we finally get consistent lighting results for scattering with respect to our single extinction sample (as you can see in the bottom picture).

 
 

  • Single scattered light sample 𝑆 = 𝑳𝒔𝒄𝒂𝒕(𝒙𝒕,𝝎𝒐): OK
  • Single transmittance sample 𝑻𝒓(𝒙,𝒙𝒔): NOT OK

→ Integrate lighting w.r.t. transmittance over the froxel depth D
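A C++ sketch of the resulting slice integration, monochrome for brevity (scattering is RGB in Frostbite while extinction is wavelength independent; the epsilon guard is mine):

```cpp
#include <algorithm>
#include <cmath>

struct Accum { float scattering = 0.0f; float transmittance = 1.0f; };

// S is the (assumed constant) scattered light in the slice, sigmaT the
// extinction sample, dd the slice depth along the view ray.
void integrateFroxelSlice(Accum& a, float S, float sigmaT, float dd)
{
    float sliceTr = std::exp(-sigmaT * dd);
    // Analytic integral of S * exp(-sigmaT * t) for t in [0, dd]:
    float Sint = (S - S * sliceTr) / std::max(sigmaT, 1e-5f);
    a.scattering    += a.transmittance * Sint;   // front-to-back accumulation
    a.transmittance *= sliceTr;
}
```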


 
 


 
 

Also improves with volumetric shadows

You can also see that this fixes the light leaking we noticed sometimes for relatively large depth slices and strongly scattering media, even when volumetric shadows are enabled.

 
 


 
 

Once we have that final integrated buffer, we can apply it on everything in our scene during the sky rendering pass. As it contains scattered light reaching the camera and transmittance, it is easy to apply it as a pre-multiplied colour-alpha on everything.

 
 

For efficiency, it is applied per vertex on transparents but we are thinking of switching this to per pixel for better quality.

 
 

  • {𝑳𝒊(𝒙,𝝎𝒐), 𝑻𝒓(𝒙,𝒙𝒔)} Similar to pre-multiplied color/alpha
  • Applied on opaque surfaces per pixel
  • Evaluated on transparent surfaces per vertex, applied per pixel

 
 


 
 

 
 

Result validation

 
 

Our target is to get physically based results. As such, we have compared our results against the physically based path tracer called Mitsuba. We constrained Mitsuba to single scattering and to use the same exposure, etc. as our example scenes.

 
 

Compare results to references from Mitsuba

  • Physically based path tracer
  • Same conditions: single scattering only, exposure, etc.

 
 

The first scene I am going to show you is a thick participating media layer, with a light above it and then inside it.

 
 


 
 

You can see here the Frostbite render on top and the Mitsuba render at the bottom. You can also see the scene with a gradient applied to it. It is easy to see that our result matches; you can also recognize the triangular shape of scattered light when the point light is within the medium.

 
 

This is a difficult case: when the participating media is non-uniform and thick, our discretisation of volumetric shadows and of the material representation shows, so you can see some small differences. But overall it matches, and we are happy with these first results; we will improve them in the future.

 
 


 
 

This is another example, showing a very good match for an HG phase function with g = 0 and g = 0.9 (strong forward scattering).

 
 


 
 

Performance

 
 

Test scene: sun + shadow cascade; 14 point lights (2 with regular & volumetric shadows); 6 local fog volumes (all with density textures).

PS4, 900p:

  Volume tile resolution      8×8        16×16
  PM material voxelization    0.45 ms    0.15 ms
  Light scattering            2.00 ms    0.50 ms
  Final accumulation          0.40 ms    0.08 ms
  Application (fog pass)      +0.1 ms    +0.1 ms
  Total                       2.95 ms    0.83 ms

 
 

  Light scattering components (8×8 tiles):

  Local lights                1.1 ms
  +Sun scattering             +0.5 ms
  +Temporal integration       +0.4 ms

 
 

You can see that the performance varies a lot depending on what you have enabled and the resolution of the clip space volumes.

 
 

This shows that it is important to carefully plan the needs of your game and its different scenes. One could even bake static scene scattering and use the emissive channel to represent the scattered light, for even faster rendering of complex volumetric lighting.

 
 

 
 

  • Volumetric shadows

 
 

Volumetric shadow maps

 
 

We also support volumetric shadow maps (shadows cast by the voxelized volumetric entities in our scene).

 
 

To this aim, we went for a simple and fast solution

 
 

  • We first define a 3-level cascaded clip map volume following and containing the camera.
    • With tweakable per-level voxel size and world space snapping
  • This volume contains all our participating media entities, voxelized again within it (required for out-of-view shadow casters; the clip space volume would not be enough)
  • A volumetric shadow map is a 3D texture (assigned to a light) that stores transmittance
    • Transmittance is evaluated by ray marching the extinction volume
    • The projection is chosen as a best fit for the light type (e.g. frustum for a spot light)
  • Our volumetric shadow maps are stored in an atlas, so we only bind a single texture (with UV scale and bias) when using them.

 
 


 
 

Volumetric shadow maps are entirely part of our shared lighting pipeline and shader code.

 
 

Part of our common light shadow system

  • Opaque
  • Particles
  • Participating media

 
 

It is sampled for each light that has it enabled, and applied to everything in the scene (particles, opaque surfaces, participating media), as visible in this video.

(See the video in the slides.)

 
 

Another bonus is that we also voxelize our particles.

 
 

We have tried many voxelization methods: point, and a blurred version of it, but these were just too noisy. Our default voxelization method is trilinear. You can see the shadow is very soft and there is no popping visible.

 
 

We also have a high quality voxelization where all threads write all the voxels contained within the particle sphere. A bit brute force for now but it works when needed.

 
 

You can see the result of volumetric shadows from particle onto participating media in the last video.

 
 

(See bonus slides for more details)

 
 


 
 

Quality: PS4

 
 

Ray marching of 32³ volumetric shadow maps:

  Spot light:   0.04 ms
  Point light:  0.14 ms

1k particles voxelization:

  Default quality:  0.03 ms
  High quality:     0.25 ms

 
 

Point lights are more expensive than spot lights because spot lights are integrated slice by slice, whereas a full ray trace is done for each point light shadow voxel. We have ideas to fix that in the near future.

 
 

Default particle voxelization is definitely cheap for 1K particles.

 
 

  • More volumetric rendering in Frostbite

 
 

Particle/Sun interaction

 
 

  • High quality scattering and self-shadowing for sun/particles interactions
  • Fourier opacity Maps [Jansen10]
  • Used in production now

 
 


 
 

Our translucent shadows in Frostbite (see [Andersson11]) allow particles to cast shadows on opaque surfaces, but not on themselves. The technique also did not support scattering.

 
 

We have added that support in Frostbite by using Fourier opacity mapping. This gives us very high quality coloured shadowing and scattering, resulting in sharp silver-lining visual effects, as you can see in these screenshots and the cloud video.

 
 

This is a special, non-unified path for the sun, but it was needed to get that extra bit of quality for the sun, which requires special attention.

 
 

Physically-based sky/atmosphere

 
 

  • Improved from [Elek09] (simpler and faster than [Bruneton08])
  • Collaboration between Frostbite, Ghost and DICE teams.
  • In production: Mirror’s Edge Catalyst, Need for Speed and Mass Effect Andromeda

 
 


 
 

We also added support for physically based sky and atmosphere scattering simulation last year. This was a fruitful collaboration between Frostbite and the Ghost and DICE game teams (mainly developed by Edvard Sandberg and Gustav Bodare at Ghost). It is now used in production by many games, such as Mirror's Edge Catalyst and Mass Effect Andromeda.

 
 

It is an improved version of Elek’s paper which is simpler and faster than Bruneton. I unfortunately have no time to dive into details in this presentation.

 
 

But in the comments I have time. Basically, the lighting artist defines the atmosphere properties, and the light scattering and sky rendering automatically adapt to the sun position. When the atmosphere is changed, we need to update our pre-computed lookup tables; this can be distributed over several frames to limit the evaluation impact on the GPU.

 
 

  • Conclusion

 
 

Physically-based volumetric rendering framework used for all games powered by Frostbite in the future

 
 

Physically based volumetric rendering

  • Participating media material definition
  • Lighting and shadowing interactions

 
 

A more unified volumetric rendering system

  • Handles many interactions
    • Participating media, volumetric shadows, particles, opaque surfaces, etc.

 
 

Future work

 
 

Improved participating media rendering

  • Phase function integral w.r.t. area lights solid angle
  • Inclusion in reflection views
  • Graph based material definition, GPU simulation, Streaming
  • Better temporal integration! Any ideas?
  • Sun volumetric shadow
  • Transparent shadows from transparent surfaces?

 
 

Optimisations

  • V-Buffer packing
  • Particles voxelization
  • Volumetric shadow maps generation
  • How to scale to 4k screens efficiently

 
 

For further discussions

 
 

sebastien.hillaire@frostbite.com

https://twitter.com/SebHillaire

 
 

 
 

References

 
 

[Lagarde & de Rousiers 2014] Moving Frostbite to PBR, SIGGRAPH 2014.

[PBR] Physically Based Rendering book, http://www.pbrt.org/.

[Wenzel07] Real time atmospheric effects in game revisited, GDC 2007.

[Mitchell07] Volumetric Light Scattering as a Post-Process, GPU Gems 3, 2007.

[Andersson11] Shiny PC Graphics in Battlefield 3, GeForceLan, 2011.

[Engelhardt10] Epipolar Sampling for Shadows and Crepuscular Rays in Participating Media with Single Scattering, I3D 2010.

[Miles] Blog post http://blog.mmacklin.com/tag/fog-volumes/

[Valliant14] Volumetric Light Effects in Killzone Shadow Fall, SIGGRAPH 2014.

[Glatzel14] Volumetric Lighting for Many Lights in Lords of the Fallen, Digital Dragons 2014.

[Hillaire14] Volumetric lights demo

[Lagarde13] Lagarde and Harduin, The art and rendering of Remember Me, GDC 2013.

[Wronski14] Volumetric fog: unified compute shader based solution to atmospheric scattering, SIGGRAPH 2014.

[Karis14] High Quality Temporal Super Sampling, SIGGRAPH 2014.

[Jansen10] Fourier Opacity Mapping, I3D 2010.

[Salvi10] Adaptive Volumetric Shadow Maps, EGSR 2010.

[Elek09] Rendering Parametrizable Planetary Atmospheres with Multiple Scattering in Real-time, CESCG 2009.

[Bruneton08] Precomputed Atmospheric scattering, EGSR 2008.