Software Rasterization – The Projection Stage
Quick Review
In the previous chapter, we gave a high-level overview of the rasterization rendering technique. It can be decomposed into two main stages: first, the projection of the triangle's vertices onto the canvas, then the rasterization of the triangle itself. Rasterization in this context really means "breaking apart" the triangle's shape into pixels, or raster element squares, which is what pixels used to be called. In this chapter, we will review the first step. We have already described this method in the two previous lessons, so we won't explain it again here. If you have any doubts about the principles behind perspective projection, check those lessons again. However, in this chapter, we will study a couple of new tricks related to projection that will be useful when we get to the lesson on the perspective projection matrix. We will learn about a new method to remap the coordinates of the projected vertices from screen space to NDC space. We will also learn more about the role of the z-coordinate in the rasterization algorithm and how it should be handled at the projection stage.【The previous chapter covered the overall idea and main steps; this one focuses on how the first step is done.】
Keep in mind, as already mentioned in the previous chapter, that the goal of the rasterization rendering technique is to solve the visibility or hidden-surface problem, which is to determine which parts of a 3D object are visible and which parts are hidden.【Always remember that the software rasterizer is used to solve the visibility problem.】
Projection: What Are We Trying to Solve?
What are we trying to solve at this stage of the rasterization algorithm? As explained in the previous chapter, the principle of rasterization is to find out whether pixels in the image overlap triangles. To do so, we first need to project triangles onto the canvas and then convert their coordinates from screen space to raster space. Pixels and triangles are then defined in the same space, which means it becomes possible to compare their respective coordinates (we can check the coordinates of a given pixel against the raster-space coordinates of a triangle's vertices).
The goal of this stage is thus to convert the vertices making up triangles from camera space to raster space.
【What we are solving here is going from 3D to the 2D plane.】
Projecting Vertices: Mind the Z-Coordinate!
In the previous two lessons, we mentioned that when we compute the raster coordinates of a 3D point, what we really need in the end are its x- and y-coordinates (the position of the 3D point in the image). As a quick reminder, recall that these 2D coordinates are obtained by dividing the x- and y-coordinates of the 3D point in camera space by the point's z-coordinate (what we called the perspective divide), and then remapping the resulting 2D coordinates from screen space to NDC space, and from NDC space to raster space. Keep in mind that because the image plane is positioned at the near clipping plane, we also need to multiply the x- and y-coordinates by the near clipping plane. Again, we explained this process in great detail in the previous two lessons.
Note that so far, we have been considering points in screen space as essentially 2D points (we didn't need the points' z-coordinate after the perspective divide). From now on though, we will declare points in screen space as 3D points, and set their z-coordinate to the camera-space point's z-coordinate as follows:
【The x- and y-coordinates are converted to the 2D plane; the z-value also needs handling, giving the screen-space x, y, z as follows.】
It is best at this point to set the projected point's z-coordinate to the inverse (the negation) of the original point's z-coordinate, which, as you know by now, is negative. Dealing with positive z-coordinates will make everything simpler later on (but this is not mandatory).【Note that the z-value is negated.】
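As a sketch, the camera-space to screen-space conversion described above might look like the following (a minimal illustration under the conventions stated in the text; the `Vec3` type and the function name are hypothetical, not the lesson's actual code):

```cpp
#include <cassert>
#include <cmath>

// Hypothetical minimal point type, for illustration only.
struct Vec3 { float x, y, z; };

// Project a camera-space point onto the image plane located at the near
// clipping plane. The camera looks down the negative z-axis, so a visible
// point has a negative z-coordinate: dividing by -z performs the
// perspective divide, and the multiplication by nearClip accounts for the
// image plane sitting at the near clipping plane. We also store the
// flipped, now positive, z-coordinate for later depth comparisons.
Vec3 cameraToScreen(const Vec3& pCamera, float nearClip)
{
    Vec3 pScreen;
    pScreen.x = nearClip * pCamera.x / -pCamera.z;
    pScreen.y = nearClip * pCamera.y / -pCamera.z;
    pScreen.z = -pCamera.z; // keep (negated) depth: screen space is 3D
    return pScreen;
}
```

With a near clipping plane at 1, a camera-space point (2, 4, -2) projects to (1, 2) on the canvas and keeps a depth of 2.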
Keeping track of the vertex z-coordinate in camera space is needed to solve the visibility problem. Understanding why is easier if you look at figure 1. Imagine two vertices, v1 and v2, which, when projected onto the canvas, have the same raster coordinates (as shown in figure 1). If we project v1 before v2, then v2 will be visible in the image when it should actually be v1 (v1 is clearly in front of v2). However, if we store the z-coordinate of the vertices along with their 2D raster coordinates, we can use these coordinates to determine which point is closest to the camera, independently of the order in which the vertices are projected (as shown in the code fragment below).【Storing the z-value is precisely what solves the visibility problem: compare depths against the z-buffer to find the occlusion relationship, as shown in the figure.】
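The code fragment referred to above can be sketched as follows (a minimal, hypothetical version, not the lesson's actual fragment). Whichever of v1 and v2 is processed first, comparing the stored z-coordinates always leaves the vertex closest to the camera in charge of the pixel:

```cpp
#include <cassert>
#include <limits>

// Hypothetical screen-space vertex: x, y raster coordinates plus the
// flipped (positive) camera-space z-coordinate kept by the projection step.
struct Vertex { float x, y, z; };

// Depth test for one pixel: accept the vertex only if it is closer to the
// camera than whatever the pixel recorded so far. Because the comparison
// uses the stored z-coordinates, the order in which v1 and v2 are
// projected no longer matters.
bool keepIfCloser(const Vertex& v, float& pixelDepth)
{
    if (v.z < pixelDepth) {
        pixelDepth = v.z;
        return true;
    }
    return false;
}
```

For the situation of figure 1: even if v2 is projected first, v1 ends up owning the pixel because its depth is smaller.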
What we want to render, though, are triangles, not vertices. So the question is, how does the method we just learned about apply to triangles? In short, we will use the triangle's vertex coordinates to find the position of the point on the triangle that the pixel overlaps (and thus its z-coordinate). This idea is illustrated in figure 2. If a pixel overlaps two or more triangles, we should be able to compute the position of the points on the triangles that the pixel overlaps, and use the z-coordinates of these points, as we did with the vertices, to determine which triangle is closest to the camera. This method will be described in detail in chapter 4 (The Depth Buffer: Finding the Depth Value of a Sample by Interpolation).【How z-values are compared is the z-buffer approach mentioned earlier; see chapter 4 for details.】
Screen Space Is Also Three-Dimensional
To summarize, to go from camera space to screen space (the process during which the perspective divide happens), we need to:【A quick summary.】
 Perform the perspective divide: that is, divide the camera-space point's x- and y-coordinates by its z-coordinate.【The x- and y-values come from this coordinate conversion.】
 But also set the projected point's z-coordinate to the inverse of the original point's z-coordinate (the point in camera space).【The z-value comes from the original z.】
Practically, this means that our projected point is not a 2D point anymore but in fact a 3D point. Or, to say it differently, screen space is not two- but three-dimensional. In his thesis, Ed Catmull writes:【From the above, screen space is itself a 3D space.】
Screen-space is also three-dimensional, but the objects have undergone a perspective distortion so that an orthogonal projection of the object onto the x-y plane would result in the expected perspective image (Ed Catmull's thesis, 1974).
You should now be able to understand this quote. The process is also illustrated in figure 3. First, the geometry's vertices are defined in camera space (top image). Then, each vertex undergoes a perspective divide. That is, the vertex's x- and y-coordinates are divided by its z-coordinate, but, as mentioned before, we also set the resulting projected point's z-coordinate to the inverse of the original vertex's z-coordinate. This, by the way, implies a change of direction of the z-axis of the screen-space coordinate system. As you can see, the z-axis is now pointing inward rather than outward (middle image in figure 3). But the most important thing to notice is that the resulting object is a deformed version of the original object, yet nonetheless a three-dimensional object. Furthermore, what Ed Catmull means when he writes "an orthogonal projection of the object onto the xy plane, would result in the expected perspective image" is that once the object is in screen space, if we trace lines perpendicular to the x-y image plane from the object to the canvas, we get a perspective representation of that object (as shown in figure 4). This is an interesting observation because it means that the image creation process can be seen as a perspective projection followed by an orthographic projection. Don't worry if you don't clearly understand the difference between perspective and orthographic projection; it is the topic of the next lesson. However, try to remember this observation, as it will come in handy later.【Figures 3 and 4 illustrate the projection.】
Remapping Screen Space Coordinates to NDC Space
In the previous two lessons, we explained that once in screen space, the x- and y-coordinates of the projected points need to be remapped to NDC space. In the previous lessons, we also explained that in NDC space, points on the canvas had their x- and y-coordinates contained in the range [0,1]. In the GPU world, though, coordinates in NDC space are contained in the range [-1,1]. Sadly, this is yet another of those conventions we need to deal with. We could have kept the convention [0,1], but because GPUs are the reference when it comes to rasterization, it is best to stick to the way the term is defined in the GPU world.【Screen space is first converted to NDC space; the coordinate range can be [0,1], but on GPUs it is typically [-1,1].】
Thus, once the points have been converted from camera space to screen space, the next step is to remap them from the range [l,r] and [b,t], for the x- and y-coordinates respectively, to the range [-1,1]. The terms l, r, b, and t here denote the left, right, bottom, and top coordinates of the canvas. By rearranging the terms, we can easily find an equation that performs the remapping we want:【The x- and y-coordinates must be remapped from [l,r] and [b,t] to [-1,1]; the derivation is as follows.】
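The equation referred to above appears as a figure in the original lesson; the derivation can be reconstructed from the surrounding text as follows, starting from a point whose x-coordinate lies within the canvas bounds:

```latex
\begin{aligned}
l &\le x \le r \\
0 &\le x - l \le r - l \\
0 &\le \frac{x - l}{r - l} \le 1 \\
0 &\le \frac{2(x - l)}{r - l} \le 2 \\
-1 &\le \frac{2(x - l)}{r - l} - 1 \le 1 \\
-1 &\le \frac{2x}{r - l} - \frac{r + l}{r - l} \le 1
\end{aligned}
```

The middle expression of the last line is the remapped coordinate, $x_{NDC} = \frac{2x}{r-l} - \frac{r+l}{r-l}$; its two terms, $\frac{2}{r-l}$ and $\frac{r+l}{r-l}$, are the coefficients the text describes as becoming entries of the perspective projection matrix.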
This is a very important equation because the red and green terms of the equation in the middle of the formula will become coefficients of the perspective projection matrix. We will study this matrix in the next lesson. For now, we will just apply this equation to remap the x-coordinate of a point in screen space to NDC space (any point that lies on the canvas has its coordinates contained in the range [-1,1] when defined in NDC space). If we apply the same reasoning to the y-coordinate, we get:【The middle terms above become coefficients of the projection matrix, taking us from screen space to NDC space. Applying the same method to the y-axis gives the following.】
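The y-coordinate equation, reconstructed by the same rearrangement with the bottom and top canvas bounds b and t (consistent with the x-coordinate derivation in the text):

```latex
\begin{aligned}
b &\le y \le t \\
-1 &\le \frac{2y}{t - b} - \frac{t + b}{t - b} \le 1
\end{aligned}
```

That is, $y_{NDC} = \frac{2y}{t-b} - \frac{t+b}{t-b}$, with $\frac{2}{t-b}$ and $\frac{t+b}{t-b}$ playing the same coefficient role as in the x-coordinate case.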
Putting Things Together
At the end of this lesson, we can now perform the first stage of the rasterization algorithm, which can be decomposed into two steps:【The two steps covered in this chapter are as follows.】

Convert a point from camera space to screen space. This essentially projects the point onto the canvas, but keep in mind that we also need to store the original point's z-coordinate (negated, so that it is positive). The point in screen space is three-dimensional, and its z-coordinate will be useful for solving the visibility problem later on.【Camera space to screen space.】

We then convert the x- and y-coordinates of these points in screen space to NDC space using the following formulas:【Screen space to NDC space.】
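This remapping can be sketched as a small helper, with the canvas bounds passed in as parameters (a minimal, hypothetical function, not the lesson's actual code):

```cpp
#include <cassert>
#include <cmath>

// Remap one screen-space coordinate from the canvas range [low, high]
// to the NDC range [-1, 1], using the rearranged equation
//   v_NDC = 2v / (high - low) - (high + low) / (high - low).
// Call with (x, l, r) for the x-axis and (y, b, t) for the y-axis.
float screenToNDC(float v, float low, float high)
{
    return 2 * v / (high - low) - (high + low) / (high - low);
}
```

For a canvas spanning [-1.5, 1.5] (arbitrary example bounds), the left edge maps to -1, the right edge to 1, and the center to 0.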
From there, it is extremely simple to convert the coordinates to raster space. We just need to remap the x- and y-coordinates from NDC space to the range [0,1] and multiply the resulting numbers by the image width and height, respectively (don't forget that in raster space the y-axis points down, while in NDC space it points up; thus we need to flip y's direction during this remapping). In code we get:
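The code the text announces is missing from this copy; a hedged sketch of the conversion just described (names are illustrative):

```cpp
#include <cassert>
#include <cmath>

// Convert a point from NDC space (x, y in [-1, 1], y-axis pointing up)
// to raster space (pixel coordinates, y-axis pointing down): remap
// [-1, 1] to [0, 1], scale by the image dimensions, and flip the y-axis.
void ndcToRaster(float xNDC, float yNDC, int imageWidth, int imageHeight,
                 float& xRaster, float& yRaster)
{
    xRaster = (xNDC + 1) / 2 * imageWidth;
    yRaster = (1 - yNDC) / 2 * imageHeight; // flip y: NDC up, raster down
}
```

For a 640x480 image, the NDC point (-1, 1) (top-left corner of the canvas) lands at raster (0, 0), and the NDC origin lands at the image center (320, 240).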