Tag: RayTracing

RayTracing – Adding Reflection and Refraction

The other advantage of ray-tracing is that, by extending the idea of ray propagation, we can very easily simulate effects like reflection and refraction, both of which are handy for simulating glass materials or mirror surfaces. In a 1980 paper entitled “An Improved Illumination Model for Shaded Display”, Turner Whitted was the first to describe how to extend Appel’s ray-tracing algorithm for more advanced rendering. Whitted’s idea extended Appel’s model of shooting rays to incorporate computations for both reflection and refraction. [Note: by extending ray propagation, the simulation easily handles reflection and refraction.]

 
 

In optics, reflection and refraction are well-known phenomena. Although a whole later lesson is dedicated to reflection and refraction, we will look quickly at what is needed to simulate them. We will take the example of a glass ball, an object which has both refractive and reflective properties. As long as we know the direction of the ray intersecting the ball, it is easy to compute what happens to it. Both the reflection and refraction directions are based on the normal at the point of intersection and the direction of the incoming ray (the primary ray). To compute the refraction direction we also need to specify the index of refraction of the material. Although we said earlier that rays travel in a straight line, we can visualize refraction as the ray being bent. When a photon hits an object of a different medium (and thus a different index of refraction), its direction changes. The science of this will be discussed in more depth later. As long as we remember that these two effects depend on the normal vector and the incoming ray direction, and that refraction also depends on the refractive index of the material, we are ready to move on. [Note: take a glass ball as the example; once a ray hits it, the reflection and refraction directions follow from physical rules, as the figure below showed.]
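As a preview of that later lesson, here is a minimal sketch, in C++-like form, of how the two directions are typically computed. The Vec3 type and the dot and clamp helpers are assumed to exist, and I and N are taken to be normalized; this is a sketch, not a definitive implementation:

    #include <cmath>
    #include <utility>

    // Reflection: mirror the incident direction I about the normal N.
    // R = I - 2 * dot(I, N) * N
    Vec3 reflect(const Vec3 &I, const Vec3 &N)
    {
        return I - 2 * dot(I, N) * N;
    }

    // Refraction via Snell's law; ior is the material's index of refraction.
    // Returns a zero vector when total internal reflection occurs.
    Vec3 refract(const Vec3 &I, const Vec3 &N, float ior)
    {
        float cosi = clamp(dot(I, N), -1.0f, 1.0f);
        float etai = 1.0f, etat = ior;  // assume the ray starts in air (ior ~ 1)
        Vec3 n = N;
        if (cosi < 0) cosi = -cosi;                   // ray is outside the object
        else { std::swap(etai, etat); n = -N; }       // ray is inside: flip the normal
        float eta = etai / etat;
        float k = 1 - eta * eta * (1 - cosi * cosi);  // k < 0: total internal reflection
        return k < 0 ? Vec3(0) : eta * I + (eta * cosi - std::sqrt(k)) * n;
    }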

 
 

 
 

Similarly, we must also be aware of the fact that an object like a glass ball is reflective and refractive at the same time. We need to compute both for a given point on the surface, but how do we mix them together? Do we take 50% of the reflection result and mix it with 50% of the refraction result? Unfortunately, it is more complicated than that. The mixing ratio depends on the angle between the primary ray (or viewing direction) and the normal of the object, as well as on the index of refraction. Fortunately for us, however, there is an equation that calculates precisely how the two should be mixed. This equation is known as the Fresnel equation. To remain concise, all we need to know, for now, is that it exists and that it will be useful for determining the mixing values. [Note: how do we mix reflection and refraction? The ratio depends on the angle of the incoming ray; this is what the Fresnel equation expresses.]
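To give a feel for what such a function looks like, here is a sketch using Schlick's approximation, a common, cheap stand-in for the full Fresnel equations (not the exact formula the later lesson derives); cosTheta is the cosine of the angle between the incoming ray and the normal:

    #include <cmath>

    // Schlick's approximation of the Fresnel reflectance.
    // Returns kr, the fraction of light that is reflected;
    // the refracted (transmitted) fraction is then 1 - kr.
    float fresnelSchlick(float cosTheta, float ior)
    {
        float r0 = (1 - ior) / (1 + ior);
        r0 = r0 * r0;                          // reflectance at normal incidence
        float x = 1 - std::fabs(cosTheta);
        return r0 + (1 - r0) * x * x * x * x * x;
    }

Note how the reflected share grows as the viewing direction grazes the surface (cosTheta approaching 0), which is exactly the angle-dependent mixing behavior described above.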

 
 

So let’s recap. How does the Whitted algorithm work? We shoot a primary ray from the eye and find the closest intersection (if any) with objects in the scene. If the ray hits an object which is not diffuse or opaque, we must do extra computational work. To compute the resulting color at that point on, say, the glass ball, we need to compute the reflection color and the refraction color and mix them together. Remember, we do that in three steps: compute the reflection color, compute the refraction color, and then apply the Fresnel equation. [Note: the algorithm shoots a ray from the eye to the first object; if that object is not opaque, the ray is split into reflected and refracted rays that keep participating in the trace, and the results are mixed by ratio. The three steps are detailed below.]

 
 

  1. First we compute the reflection direction. For that we need two items: the normal at the point of intersection and the primary ray’s direction. Once we obtain the reflection direction, we shoot a new ray in that direction. Going back to our old example, let’s say the reflection ray hits the red sphere. Using Appel’s algorithm, we find out how much light reaches that point on the red sphere by shooting a shadow ray to the light. That gives us a color (black if the point is shadowed) which is then multiplied by the light intensity and returned to the glass ball’s surface.
  2. Now we do the same for the refraction. Note that, because the ray goes through the glass ball, it is said to be a transmission ray (light has traveled from one side of the sphere to the other; it was transmitted). To compute the transmission direction we need the normal at the hit point, the primary ray direction, and the refractive index of the material (in this example it may be something like 1.5 for glass). With the new direction computed, the refracted ray continues on its course to the other side of the glass ball. There again, because it changes medium, the ray is refracted one more time. As you can see in the adjacent image, the direction of the ray changes when the ray enters and leaves the glass object. Refraction takes place every time there is a change of medium, where the two media, the one the ray exits and the one it enters, have different indices of refraction. As you probably know, the refractive index of air is very close to 1 and the refractive index of glass is around 1.5. The effect of refraction is to bend the ray slightly, which is what makes objects appear shifted when we look through objects of different refractive indices. Let’s imagine now that when the refracted ray leaves the glass ball it hits a green sphere. There again we compute the local illumination at the point of intersection between the green sphere and the refracted ray (by shooting a shadow ray). The color (black if it is shadowed) is then multiplied by the light intensity and returned to the glass ball’s surface.
  3. Lastly, we compute the Fresnel equation. We need the refractive index of the glass ball and the angle between the primary ray and the normal at the hit point. Using a dot product (we will explain that later), the Fresnel equation returns the two mixing values.

     
     

Here is some pseudocode to reinforce how it works. This is a minimal sketch of Whitted-style tracing; the Hit structure and the trace, reflect, refract, and fresnel helpers are illustrative names, not a definitive implementation:
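    // A minimal sketch of one Whitted-style shading step. The Hit structure
    // and the trace/reflect/refract/fresnel helpers are assumed to exist.
    Color castRay(const Vec3 &orig, const Vec3 &dir, int depth)
    {
        if (depth > maxDepth) return backgroundColor;    // stop the recursion
        Hit hit;
        if (!trace(orig, dir, hit)) return backgroundColor;
        if (hit.isReflectiveAndRefractive) {
            // Step 1: compute the reflection color.
            Color refl = castRay(hit.point, reflect(dir, hit.normal), depth + 1);
            // Step 2: compute the refraction (transmission) color.
            Color refr = castRay(hit.point, refract(dir, hit.normal, hit.ior), depth + 1);
            // Step 3: mix the two according to the Fresnel equation.
            float kr = fresnel(dir, hit.normal, hit.ior);
            return kr * refl + (1 - kr) * refr;
        }
        // Diffuse/opaque object: shoot a shadow ray toward the light.
        Vec3 toLight = normalize(lightPos - hit.point);
        Hit blocker;
        bool shadowed = trace(hit.point + hit.normal * 1e-4f, toLight, blocker);
        return shadowed ? Color(0) : hit.albedo * lightIntensity;
    }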

 
 

One last, beautiful thing about this algorithm is that it is recursive (that is also a curse in a way, too!). In the case we have studied so far, the reflection ray hits a red, opaque sphere and the refraction ray hits a green, opaque, and diffuse sphere. However, imagine that the red and green spheres are glass balls as well. To find the color returned by the reflection and the refraction rays, we would have to follow the same process with the red and the green spheres that we used with the original glass ball. This is a serious drawback of the ray-tracing algorithm and can actually be nightmarish in some cases. Imagine that our camera is in a box which has only reflective faces. Theoretically, the rays are trapped and will continue bouncing off of the box’s walls endlessly (or until you stop the simulation). For this reason, we have to set an arbitrary limit that prevents the rays from recursing endlessly. Each time a ray is either reflected or refracted its depth is incremented, and we simply stop the recursion process when the ray depth is greater than the maximum recursion depth (the depth check at the top of the sketch above). [Note: the algorithm is recursive; set a sensible depth limit to keep results and running time bounded.]

RayTracing – Implementing the Raytracing Algorithm

https://www.scratchapixel.com/lessons/3d-basic-rendering/introduction-to-ray-tracing/implementing-the-raytracing-algorithm

 
 

We have covered everything there is to say! We are now prepared to write our first ray-tracer. You should now be able to guess how the ray-tracing algorithm works. [Note: time to implement the algorithm.]

 
 

First of all, take a moment to notice that the propagation of light in nature is just a countless number of rays emitted from light sources that bounce around until they hit the surface of our eye. Ray-tracing is, therefore, elegant in the way that it is based directly on what actually happens around us. Apart from the fact that it follows the path of light in reverse order, it is nothing less than a perfect nature simulator. [Note: light in nature is countless rays emitted from light sources that bounce until they reach the surface of the eye; ray tracing simulates exactly this, except that it follows the light paths in reverse order.]

 
 

 
 

The ray-tracing algorithm takes an image made of pixels. For each pixel in the image, it shoots a primary ray into the scene. The direction of that primary ray is obtained by tracing a line from the eye to the center of that pixel. Once we have the primary ray’s direction set, we check every object in the scene for an intersection with it. In some cases, the primary ray will intersect more than one object. When that happens, we select the object whose intersection point is the closest to the eye. We then shoot a shadow ray from the intersection point to the light (Figure 6, top). If this particular ray does not intersect an object on its way to the light, the hit point is illuminated. If it does intersect with another object, that object casts a shadow on it (figure 2). [Note: ray tracing works per pixel: shoot a ray from the eye through each pixel, test it against every object in the scene, keep the closest intersection, then shoot a shadow ray; if that ray reaches the light unobstructed the point is lit, otherwise it lies in another object's shadow.]

 
 

 
 

If we repeat this operation for every pixel, we obtain a two-dimensional representation of our three-dimensional scene (figure 3). [Note: looping over all pixels produces the final image.]

 
 

 
 

Here is an implementation of the algorithm in pseudocode (a sketch only; the camera model and helper names such as computePrimaryRay and intersect are illustrative):
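    #include <cmath>

    // A sketch of the top-level loop: one primary ray per pixel, keep the
    // closest intersection, then test a shadow ray against every object.
    for (int j = 0; j < imageHeight; ++j) {
        for (int i = 0; i < imageWidth; ++i) {
            Ray primRay = computePrimaryRay(eye, i, j);  // eye through pixel center
            float minDist = INFINITY;                    // distance to closest hit
            const Object *closest = nullptr;
            Vec3 pHit, nHit;
            for (const Object &obj : scene) {
                Vec3 p, n;
                if (obj.intersect(primRay, p, n) && distance(eye, p) < minDist) {
                    minDist = distance(eye, p);
                    closest = &obj; pHit = p; nHit = n;
                }
            }
            if (!closest) { image[i][j] = backgroundColor; continue; }
            Ray shadowRay(pHit + nHit * 1e-4f, normalize(lightPos - pHit));
            bool inShadow = false;
            for (const Object &obj : scene)
                if (obj.intersects(shadowRay)) { inShadow = true; break; }
            image[i][j] = inShadow ? Color(0) : closest->color * lightBrightness;
        }
    }

The small bias added along the normal when spawning the shadow ray is a standard trick to stop the ray from immediately re-intersecting the surface it starts on.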

 
 

 
 

The beauty of ray-tracing, as one can see, is that it takes just a few lines to code; one could certainly write a basic ray-tracer in 200 lines. Unlike other algorithms, such as a scanline renderer, ray-tracing takes very little effort to implement. [Note: the beauty of ray tracing is that a basic implementation is only about 200 lines, as the pseudocode above suggests.]

 
 

This technique was first described by Arthur Appel in 1969 in a paper entitled “Some Techniques for Shading Machine Renderings of Solids”. So, if this algorithm is so wonderful, why didn’t it replace all the other rendering algorithms? The main reason, at the time (and even today to some extent), was speed. As Appel mentions in his paper: [Note: first proposed in 1969, but it never spread in practice because render times were so long.]

 
 

“This method is very time consuming, usually requiring for useful results several thousand times as much calculation time as a wire frame drawing. About one half of this time is devoted to determining the point to point correspondence of the projection and the scene.”

 
 

In other words, it is slow (but as Kajiya, one of the most influential researchers in all of computer graphics history, once said: “ray tracing is not slow – computers are”). It is extremely time consuming to find the intersection between rays and geometry. For decades, the algorithm’s speed has been the main drawback of ray-tracing. However, as computers become faster, it is less and less of an issue. One thing must still be said: compared to other techniques, like the z-buffer algorithm, ray-tracing is still much slower. However, today, with fast computers, we can compute a frame that used to take one hour in a few minutes or less. In fact, real-time and interactive ray-tracers are a hot topic. [Note: in other words, it is slow, mostly because of ray-geometry intersection tests, but this matters less as hardware improves; compared with rasterization it is still much slower, yet real-time ray tracing is now a hot research topic.]

 
 

To summarize, it is important to remember (again) that the rendering routine can be looked at as two separate processes. One step determines if a point is visible at a particular pixel (the visibility part); the second shades that point (the shading part). Unfortunately, both steps require expensive and time-consuming ray-geometry intersection tests. The algorithm is elegant and powerful but forces us to trade rendering time for accuracy, and vice versa. Since Appel published his paper, a lot of research has been done to accelerate the ray-object intersection routines. By combining these acceleration schemes with new computer technology, it has become easier to use ray-tracing, to the point where it is now used in nearly every production rendering software. [Note: in summary, ray-traced rendering splits into two steps, visibility (is this point visible through this pixel?) and shading; both need costly ray intersection tests.]

RayTracing – Raytracing Algorithm in a Nutshell

https://www.scratchapixel.com/lessons/3d-basic-rendering/introduction-to-ray-tracing/raytracing-algorithm-in-a-nutshell

 
 

The phenomenon described by Ibn al-Haytham explains why we see objects. Two interesting remarks can be made based on his observations: firstly, without light we cannot see anything, and secondly, without objects in our environment, we cannot see light. If we were to travel in intergalactic space, that is what would typically happen. If there is no matter around us, we cannot see anything but darkness, even though photons are potentially moving through that space. [Note: Ibn al-Haytham explained why we see objects; two interesting consequences are that without light we see nothing, and without objects we cannot see light.]

 
 

 
 

Forward Tracing

 
 

If we are trying to simulate the light-object interaction process in a computer generated image, then there is another physical phenomenon which we need to be aware of. Compared to the total number of rays reflected by an object, only a select few of them will ever reach the surface of our eye. Here is an example. Imagine we have created a light source which emits only one single photon at a time. Now let’s examine what happens to that photon. It is emitted from the light source and travels in a straight line path until it hits the surface of our object. Ignoring photon absorption, we can assume the photon is reflected in a random direction. If the photon hits the surface of our eye, we “see” the point where the photon was reflected from (figure 1). [Note: when simulating illumination, only a small fraction of the rays reflected by objects ever enter the eye; the figure illustrates this.]

 
 

 
 

We can now begin to look at the situation in terms of computer graphics. First, we replace our eyes with an image plane composed of pixels. In this case, the photons emitted will hit one of the many pixels on the image plane, increasing the brightness at that point to a value greater than zero. This process is repeated multiple times until all the pixels are adjusted, creating a computer generated image. This technique is called forward ray-tracing because we follow the path of the photon forward from the light source to the observer. [Note: to simulate this, replace the eye with an image; photons leave the light source, and whenever one hits the image we brighten that pixel, repeating until all the light has been traced. This is forward ray-tracing.]
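In code, that forward process would look roughly like the sketch below (emitting from a point light, bouncing diffusely, and testing each bounce against the eye; all helper names are illustrative). It already hints at the inefficiency discussed next:

    // Forward (light) tracing, conceptually: emit photons from the light
    // and accumulate brightness whenever a bounce happens to reach the eye.
    for (int n = 0; n < numPhotons; ++n) {
        Vec3 pos = lightPos;
        Vec3 dir = randomDirection();               // photon leaves at random
        for (int bounce = 0; bounce < maxBounces; ++bounce) {
            Hit hit;
            if (!trace(pos, dir, hit)) break;       // photon escapes the scene
            int px, py;
            if (photonReachesEye(hit.point, eye, px, py))   // rarely true!
                image[px][py] += hit.albedo;        // brighten the pixel it crosses
            pos = hit.point;
            dir = randomHemisphereDirection(hit.normal);    // diffuse bounce
        }
    }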

 
 

However, do you see a potential problem with this approach? [Note: this method has a problem.]

 
 

The problem is the following: in our example we assumed that the reflected photon always intersected the surface of the eye. In reality, rays are essentially reflected in every possible direction, each of which has a very, very small probability of actually hitting the eye. We would potentially have to cast zillions of photons from the light source to find only one photon that would strike the eye. In nature this is how it works, as countless photons travel in all directions at the speed of light. In the computer world, simulating the interaction of that many photons with objects in a scene is just not a practical solution, for reasons we will now explain. [Note: only a tiny fraction of the photons we cast would ever hit the eye and contribute to the image, so an enormous number must be shot.]

 
 

So you may think: “Do we really need to shoot photons in random directions? Since we know the eye’s position, why not just send the photon in that direction and see which pixel in the image it passes through, if any?” That would certainly be one possible optimization; however, we can only use this method for certain types of material. For reasons we will explain in a later lesson on light-matter interaction, directionality is not important for diffuse surfaces. This is because a photon that hits a diffuse surface can be reflected in any direction within the hemisphere centered around the normal at the point of contact. However, if the surface is a mirror, and does not have diffuse characteristics, the ray can only be reflected in one very precise direction: the mirrored direction (something which we will learn how to compute later on). For this type of surface, we cannot artificially change the direction of the photon if it is actually supposed to follow the mirrored direction, which means this solution is not completely satisfactory. [Note: one idea for improving photon efficiency is to force photons toward the eye, discarding other directions at each bounce; the problem is that this cheat is invalid for mirror-like objects.]

 
 

Even if we do decide to use this method, with a scene made up of diffuse objects only, we would still face one major problem. We can visualize the process of shooting photons from a light into a scene as if we were spraying light rays (or small particles of paint) onto an object’s surface. If the spray is not dense enough, some areas will not be illuminated uniformly. [Note: another issue remains even for all-diffuse scenes: coverage is uneven unless we spray a huge number of photons.]

 
 

Imagine that we are trying to paint a teapot by making dots with a white marker pen onto a black sheet of paper (consider every dot to be a photon). As we see in the image below, at first only a few photons intersect the teapot, leaving many uncovered areas. As we continue to add dots, the density of photons increases until the teapot is “almost” entirely covered with photons, making the object more easily recognisable. [Note: as the figure shows, rendering a teapot this way plays out as random white dots accumulating one by one.]

 
 

But shooting 1000 photons, or even X times more, will never truly guarantee that the surface of our object will be totally covered with photons. That is a major drawback of this technique. In other words, we would probably have to let the program run until we decided that it had sprayed enough photons onto the object’s surface to get an accurate representation of it. This implies that we would need to watch the image as it is being rendered in order to decide when to stop the application. In a production environment, this simply isn’t possible. Plus, as we will see, the most expensive task in a ray-tracer is finding ray-geometry intersections. Creating many photons from the light source is not an issue, but having to find all of their intersections within the scene would be prohibitively expensive. [Note: in practice, no finite number of photons is guaranteed to fill in all the black holes in the teapot; the process is uncontrollable and expensive.]

 
 

 
 

Conclusion: forward ray-tracing (or light tracing, because we shoot rays from the light) makes it technically possible to simulate the way light travels in nature on a computer. However, this method, as discussed, is neither efficient nor practical. In a seminal paper entitled “An Improved Illumination Model for Shaded Display”, published in 1980, Turner Whitted (one of the earliest researchers in computer graphics) wrote: [Note: forward tracing is one way to simulate light on a computer, but it is impractical; “An Improved Illumination Model for Shaded Display” says:]

 
 

“In an obvious approach to ray tracing, light rays emanating from a source are traced through their paths until they strike the viewer. Since only a few will reach the viewer, this approach is wasteful. In a second approach suggested by Appel, rays are traced in the opposite direction, from the viewer to the objects in the scene”. [Note: forward tracing is too wasteful; instead, think of the light paths in reverse.]

 
 

We will now look at this other mode Whitted talks about.

 
 

Backward Tracing

 
 

Instead of tracing rays from the light source to the receptor (such as our eye), we trace rays backwards from the receptor to the objects. Because this direction is the reverse of what happens in nature, it is fittingly called backward ray-tracing, or eye tracing, because we shoot rays from the eye position (figure 2). This method provides a convenient solution to the flaw of forward ray-tracing. Since our simulations cannot be as fast and as perfect as nature, we must compromise and trace a ray from the eye into the scene. If the ray hits an object, then we find out how much light it receives by throwing another ray (called a light ray or shadow ray) from the hit point to the scene’s light. Occasionally this “light ray” is obstructed by another object from the scene, meaning that our original hit point is in a shadow; it doesn’t receive any illumination from the light. For this reason, we don’t call these rays light rays but rather shadow rays. In CG literature, the first ray we shoot from the eye into the scene is called a primary ray, visibility ray, or camera ray. [Note: in backward ray-tracing, shown in the figure below, rays start at the eye and propagate in reverse until they connect back to the light source.]
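The shadow-ray test at the heart of this method is just one more intersection query, sketched below (the trace helper and Hit structure are illustrative; the small bias keeps the ray from re-hitting the surface it starts on):

    // Is pHit in shadow? Look for any blocker between the point and the light.
    bool inShadow(const Vec3 &pHit, const Vec3 &nHit, const Vec3 &lightPos)
    {
        Vec3 toLight = lightPos - pHit;
        float distToLight = length(toLight);
        Ray shadowRay(pHit + nHit * 1e-4f, toLight / distToLight);
        Hit hit;
        return trace(shadowRay, hit) && hit.distance < distToLight;
    }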

 
 

 
 

Conclusion

 
 

In computer graphics, the concept of shooting rays either from the light or from the eye is called path tracing. The term ray-tracing can also be used, but the concept of path tracing suggests that this method of making computer generated images relies on following the path from the light to the camera (or vice versa). By doing so in a physically realistic way, we can easily simulate optical effects such as caustics or the reflection of light by other surfaces in the scene (indirect illumination). These topics will be discussed in other lessons. [Note: shooting rays from the light or from the eye is called path tracing; the term emphasizes following the path between light and camera (in either direction), and doing so in a physically realistic way makes effects like caustics and indirect illumination easy to simulate. These topics come in later lessons.]

RayTracing – How Does It Work?

https://www.scratchapixel.com/lessons/3d-basic-rendering/introduction-to-ray-tracing/how-does-it-work

 
 

To begin this lesson, we will explain how a three-dimensional scene is made into a viewable two-dimensional image. Once we understand that process and what it involves, we will be able to utilize a computer to simulate an “artificial” image by similar methods. We like to think of this section as the theory that more advanced CG is built upon. [Note: this lesson first explains how a 2D image is obtained from a 3D scene.]

 
 

In the second section of this lesson, we will introduce the ray-tracing algorithm and explain, in a nutshell, how it works. We have received email from various people asking why we are focused on ray-tracing rather than other algorithms. The truth is, we are not. Why did we choose to focus on ray-tracing in this introductory lesson? Simply because this algorithm is the most straightforward way of simulating the physical phenomena that cause objects to be visible. For that reason, we believe ray-tracing is the best choice, among other techniques, when writing a program that creates simple images. [Note: we then introduce the ray-tracing algorithm, chosen because it is the most direct way of simulating the physics that makes objects visible, which makes it the best starting point for a simple image-generating program and the thing we want to learn.]

 
 

To start, we will lay the foundation with the ray-tracing algorithm. However, as soon as we have covered all the information needed to implement, for example, a scanline renderer, we will show how to do that as well. [Note: ray tracing comes first; other renderers such as scanline will be shown once the groundwork is in place.]

 
 

 
 

How Does an Image Get Created?

 
 

Although it seems unusual to start with the following statement, the first thing we need to produce an image is a two-dimensional surface (this surface needs to have some area and cannot be a point). With this in mind, we can visualize a picture as a cut made through a pyramid whose apex is located at the center of our eye and whose height is parallel to our line of sight (remember, in order to see something, we must view along a line that connects to that object). We will call this cut, or slice, the image plane (you can see this image plane as the canvas used by painters). The image plane is a computer graphics concept, and we will use it as a two-dimensional surface to project our three-dimensional scene onto. Although it may seem obvious, what we have just described is one of the most fundamental concepts used to create images on a multitude of different apparatuses. For example, an equivalent in photography is the surface of the film (or, as just mentioned, the canvas used by painters). [Note: in graphics terms, rendering means presenting a 3D scene on a 2D image.]

 
 

 
 

 
 

Perspective Projection

 
 

Let’s imagine we want to draw a cube on a blank canvas. The easiest way of describing the projection process is to start by drawing lines from each corner of the three-dimensional cube to the eye. To map out the object’s shape on the canvas, we mark a point where each line intersects with the surface of the image plane. For example, let us say that c0 is a corner of the cube and that it is connected to three other points: c1, c2, and c3. After projecting these four points onto the canvas, we get c0′, c1′, c2′, and c3′. If c0-c1 defines an edge, then we draw a line from c0′ to c1′. If c0-c2 defines an edge, then we draw a line from c0′ to c2′. [Note: the simplest way to draw a cube on the image is to project its vertices, then connect the projected vertices.]

 
 

If we repeat this operation for the remaining edges of the cube, we end up with a two-dimensional representation of the cube on the canvas. We have then created our first image using perspective projection. If we repeat this process for each object in the scene, we get an image of the scene as it appears from a particular vantage point. It was only at the beginning of the 15th century that painters started to understand the rules of perspective projection. [Note: repeating this over every edge completes the cube, and repeating it over every object renders the scene; this is the perspective that painters began to grasp in the 15th century.]
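In code, the projection of a single corner is just a division by depth. Here is a minimal sketch, assuming a camera at the origin looking down the negative z-axis with the canvas one unit away (a common convention, not the only one); Vec2 and Vec3 are assumed helper types:

    // Project a 3D point, given in camera space, onto the image plane at z = -1.
    // The farther the point (the larger -z is), the closer to the center it
    // lands: that is the perspective effect.
    Vec2 project(const Vec3 &p)
    {
        // assumes p.z < 0, i.e. the point lies in front of the camera
        return Vec2(p.x / -p.z, p.y / -p.z);
    }

Drawing the cube then means calling this on its eight corners and connecting the projected points whose originals share an edge, exactly as described above.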

 
 

 
 

 
 

Light and Color

 
 

Once we know where to draw the outline of the three-dimensional objects on the two-dimensional surface, we can add colors to complete the picture. [Note: the wireframe is drawn; now we color it.]

 
 

To summarize quickly what we have just learned: we can create an image from a three-dimensional scene in a two-step process. The first step consists of projecting the shapes of the three-dimensional objects onto the image surface (or image plane). This step requires nothing more than connecting lines from the object’s features to the eye. An outline is then created by going back and drawing on the canvas where these projection lines intersect the image plane. As you may have noticed, this is a geometric process. The second step consists of adding colors to the picture’s skeleton. [Note: quick summary: creating an image has two steps, projection first, then coloring.]

 
 

An object’s color and brightness, in a scene, is mostly the result of lights interacting with an object’s materials. Light is made up of photons (electromagnetic particles) that have, in other words, an electric component and a magnetic component. They carry energy and oscillate like sound waves as they travel in straight lines. Photons are emitted by a variety of light sources, the most notable example being the sun. If a group of photons hits an object, three things can happen: they can be absorbed, reflected, or transmitted. The percentage of photons reflected, absorbed, and transmitted varies from one material to another and generally dictates how the object appears in the scene. However, the one rule that all materials have in common is that the total number of incoming photons is always the same as the sum of the reflected, absorbed, and transmitted photons. In other words, if we have 100 photons illuminating a point on the surface of the object, 60 might be absorbed and 40 might be reflected; the total is still 100. We will never tally 70 absorbed and 60 reflected, or 20 absorbed and 50 reflected, because the total of transmitted, absorbed, and reflected photons has to be 100. [Note: an object's color and brightness come from light interacting with its material, following the usual rules of optics.]

 
 

In science, we differentiate only two types of materials: metals, which are called conductors, and dielectrics. Dielectrics include things such as glass, plastic, wood, water, etc. These materials have the property of being electrical insulators (pure water is an electrical insulator). Note that a dielectric material can be either transparent or opaque. Both the glass balls and the plastic balls in the image below are made of dielectric materials. In fact, every material is, in one way or another, transparent to some sort of electromagnetic radiation. X-rays, for instance, can pass through the body. [Note: materials split into conductors and dielectrics, and a dielectric can be either transparent or opaque.]

 
 

An object can also be made out of a composite, or multi-layered, material. For example, one can have an opaque object (say, wood) with a transparent coat of varnish on top of it, which makes it look both diffuse and shiny at the same time, like the colored plastic balls in the image below. [Note: composite or layered materials, such as varnished wood or skin, combine several of these behaviors.]

 
 

 
 

Let’s consider the case of opaque and diffuse objects for now. To keep it simple, we will assume that the absorption process is responsible for the object’s color. White light is made up of “red”, “blue”, and “green” photons. If a white light illuminates a red object, the absorption process filters out (or absorbs) the “green” and the “blue” photons. Because the object does not absorb the “red” photons, they are reflected. This is the reason why this object appears red. Now, the reason we see the object at all is that some of the “red” photons reflected by the object travel towards us and strike our eyes. Each point on an illuminated area, or object, radiates (reflects) light rays in every direction. Only one ray from each point strikes the eye perpendicularly and can therefore be seen. Our eyes are made of photoreceptors that convert the light into neural signals. Our brain is then able to use these signals to interpret the different shades and hues (how, we are not exactly sure). This is a very simplistic way to describe the phenomena involved; everything is explained in more detail in the lesson on color (which you can find in the section Mathematics and Physics for Computer Graphics). [Note: a basic-physics example of why a red object looks red.]

 
 

 
 

Like the concept of perspective projection, it took a while for humans to understand light. The Greeks developed a theory of vision in which objects are seen by rays of light emanating from the eyes. An Arab scientist, Ibn al-Haytham (c. 965-1039), was the first to explain that we see objects because the sun’s rays of light, streams of tiny particles traveling in straight lines, are reflected from objects into our eyes, forming images (Figure 3). Now let us see how we can simulate nature with a computer! [Note: Ibn al-Haytham was the first to explain that we see objects because of reflected light; next we will simulate this physical phenomenon with a computer.]