What is GPU rasterization, and what does it actually do?

Rasterization is a key step in producing an image on a computer, yet OpenGL, DirectX, and every other graphics API wrap the rasterization step away from you. I wrote my own software rasterizer, so here is how rasterization can be implemented.

Why rasterize?

The input of the graphics pipeline is primitive vertices; its output is pixels. In between there is an intermediate product called a fragment. A fragment corresponds to one pixel, but carries extra attributes used for shading, such as a depth value and a normal vector. From a fragment we can compute the color of the pixel that will finally be produced. The process of computing fragments from input vertices is what we call rasterization. Why rasterize? Because we need the fragments from which the final colors are computed.

What are the inputs and outputs of rasterization?

Like any ordinary function, the rasterization function takes inputs and produces outputs. From the definition above, its input is the vertex structures that make up a primitive and its output is fragment structures. Why structures? Because both can be described with a C struct.
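For instance, here is a minimal sketch of the two structures. The field names are chosen to match the code later in this post; the exact layout is an assumption, not a fixed interface:

struct Vertex {              // one primitive vertex, after vertex processing
    float x, y, z, w;        // clip-space homogeneous position
    float nx, ny, nz;        // normal vector
    float s, t;              // texture coordinates
};

struct Fragment {            // one candidate pixel produced by rasterization
    float ndcX, ndcY, ndcZ;  // interpolated position (ndcZ feeds the depth test)
    float nx, ny, nz;        // interpolated normal
    float s, t;              // interpolated texture coordinates
};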
Where in the pipeline does rasterization happen?

A graphics API typically exposes a vertex program and a fragment program ("shader" tends to sound mystifying, so let's say "program"). Between the two, the GPU performs rasterization and interpolation, which is why the input of the fragment program is the output of the vertex program after interpolation. Since rasterization happens after vertex processing, the vertices it receives have already been through vertex processing, i.e. the MVP transform, multiplied by the projection matrix. Note: at this point the perspective divide has not yet happened. Rasterization interpolates in clip space, never in normalized device space, so a vertex position here is a four-component homogeneous coordinate, not a 3D point!
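One concrete consequence, using the weights pa, pb, pc derived below: a position must be interpolated component-wise in clip space and divided by w only at the end, e.g.

ndcZ = (pa*az + pb*bz + pc*cz) / (pa*aw + pb*bw + pc*cw)

which is exactly what the rasterize function at the end of this post does with interpolate3f followed by the clipW divide.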
How do we implement rasterization?

First, we know what the inputs and outputs are, and we should know exactly what data we have to work with.

Begin by moving the input vertices toward screen coordinates: convert the clip-space vertex positions to normalized device coordinates, like this:
ndcA.x=clipA.x/clipA.w;
ndcA.y=clipA.y/clipA.w;
ndcB.x=clipB.x/clipB.w;
ndcB.y=clipB.y/clipB.w;
ndcC.x=clipC.x/clipC.w;
ndcC.y=clipC.y/clipC.w;
Then apply the viewport transform to the normalized coordinates:
viewPortTransform(face->ndcA.x,face->ndcA.y,fb->width,fb->height,scrAX,scrAY);
viewPortTransform(face->ndcB.x,face->ndcB.y,fb->width,fb->height,scrBX,scrBY);
viewPortTransform(face->ndcC.x,face->ndcC.y,fb->width,fb->height,scrCX,scrCY);
This yields three 2D coordinates, the final screen positions of the three vertices, which together form a 2D triangle. Next compute that triangle's bounding box:
int minX=max(0,min(scrAX,min(scrBX,scrCX)));
int maxX=min(fb->width-1,max(scrAX,max(scrBX,scrCX)));
int minY=max(0,min(scrAY,min(scrBY,scrCY)));
int maxY=min(fb->height-1,max(scrAY,max(scrBY,scrCY)));
Take care to stay within the screen: anything outside it is clipped away.
Walk the bounding box to get the screen position of every potential fragment:
for(int scrX=minX;scrX<=maxX;scrX++) {
for(int scrY=minY;scrY<=maxY;scrY++) {
For each one, recover the corresponding normalized-device coordinates:
invViewPortTransform(scrX,scrY,fb->width,fb->height,ndcX,ndcY);
This is the inverse viewport transform. Both the viewport transform and its inverse are easy: they are just a scale plus a translation of the coordinates.
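A minimal sketch of the pair, assuming NDC x and y in [-1,1], a y axis that flips between NDC and screen space, and rounding to the nearest pixel (the real signatures may differ):

void viewPortTransform(float ndcX, float ndcY, int width, int height,
                       int& scrX, int& scrY) {
    // scale [-1,1] to [0,width-1] x [0,height-1]; screen y grows downward
    scrX = (int)((ndcX + 1.0f) * 0.5f * (width - 1) + 0.5f);
    scrY = (int)((1.0f - ndcY) * 0.5f * (height - 1) + 0.5f);
}

void invViewPortTransform(int scrX, int scrY, int width, int height,
                          float& ndcX, float& ndcY) {
    // undo the scale and translation to recover NDC coordinates
    ndcX = 2.0f * scrX / (width - 1) - 1.0f;
    ndcY = 1.0f - 2.0f * scrY / (height - 1);
}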
So now we have the NDC x and y of each potential fragment. Why only "potential"? Because we cannot yet tell whether the point lies inside or outside the triangle being rasterized, and only fragments inside the triangle are computed.

What good are these coordinates, then?

There is a formula that yields the proportions in which the three vertices influence the fragment, also called the weights. In it, a, b, c denote the triangle's three vertices, and ax, ay, aw are the x, y, w components of vertex a's clip-space homogeneous (four-component) coordinate. The z value is not used here, because z itself must be computed from these very weights.
How is this formula derived?

We know the clip-space homogeneous coordinates of the triangle abc to be rasterized. Call the weights alpha, beta, gamma by the names pa, pb, pc. Each fragment's clip-space homogeneous coordinate is then:
x=pa*ax+pb*bx+pc*cx
y=pa*ay+pb*by+pc*cy
z=pa*az+pb*bz+pc*cz
w=pa*aw+pb*bw+pc*cw
The fragment's coordinates in normalized device space then follow by dividing by w:
nx=(pa*ax+pb*bx+pc*cx)/(pa*aw+pb*bw+pc*cw)
ny=(pa*ay+pb*by+pc*cy)/(pa*aw+pb*bw+pc*cw)
Written as a 3x3 matrix equation this is:
[ ax  bx  cx ]   [ pa ]   [ w*nx ]
[ ay  by  cy ] * [ pb ] = [ w*ny ]
[ aw  bw  cw ]   [ pc ]   [ w    ]
where nx and ny are the fragment's NDC x and y obtained earlier. Because pa, pb, pc are ratios, the common factor w can be dropped, so pa, pb, pc are obtained simply by inverting the 3x3 matrix.
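A sketch of that step, using a cofactor-based 3x3 inverse. In the function below the inverse is assumed to be precomputed once per triangle and stored in face->clipMatrixInv, since it does not change per pixel:

// m and inv are row-major 3x3 matrices; returns false for a degenerate triangle
bool invert3x3(const float m[9], float inv[9]) {
    float det = m[0]*(m[4]*m[8]-m[5]*m[7])
              - m[1]*(m[3]*m[8]-m[5]*m[6])
              + m[2]*(m[3]*m[7]-m[4]*m[6]);
    if (det == 0.0f) return false;
    float id = 1.0f/det;
    inv[0]=(m[4]*m[8]-m[5]*m[7])*id; inv[1]=(m[2]*m[7]-m[1]*m[8])*id; inv[2]=(m[1]*m[5]-m[2]*m[4])*id;
    inv[3]=(m[5]*m[6]-m[3]*m[8])*id; inv[4]=(m[0]*m[8]-m[2]*m[6])*id; inv[5]=(m[2]*m[3]-m[0]*m[5])*id;
    inv[6]=(m[3]*m[7]-m[4]*m[6])*id; inv[7]=(m[1]*m[6]-m[0]*m[7])*id; inv[8]=(m[0]*m[4]-m[1]*m[3])*id;
    return true;
}

// per pixel: multiply the inverse by (nx, ny, 1) to get the raw weights
// pa = inv[0]*nx + inv[1]*ny + inv[2];
// pb = inv[3]*nx + inv[4]*ny + inv[5];
// pc = inv[6]*nx + inv[7]*ny + inv[8];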
Note, though, that we need pa+pb+pc=1, so the solved values must be normalized:
float sum=pa+pb+pc;
pa/=sum; pb/=sum; pc/=sum;
Then throw away any fragment with a negative weight, since it lies outside the triangle:
if(pa<0||pb<0||pc<0) continue;
With these three weights we can now interpolate the vertex attributes.
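interpolate3f, used heavily in the function below, is nothing more than the weighted sum; this one-liner is a sketch with the signature inferred from its call sites:

void interpolate3f(float pa, float pb, float pc,
                   float va, float vb, float vc, float& out) {
    out = pa*va + pb*vb + pc*vc;  // blend the three vertex values by weight
}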
The complete rasterization function looks like this:
void rasterize(FrameBuffer* fb,DepthBuffer* db,FragmentShader fs,Face* face) {
float ndcX=0,ndcY=0,clipW=0;
int scrAX,scrAY,scrBX,scrBY,scrCX,scrCY;
Fragment frag; // declared here so the loop below can fill it
viewPortTransform(face->ndcA.x,face->ndcA.y,fb->width,fb->height,scrAX,scrAY);
viewPortTransform(face->ndcB.x,face->ndcB.y,fb->width,fb->height,scrBX,scrBY);
viewPortTransform(face->ndcC.x,face->ndcC.y,fb->width,fb->height,scrCX,scrCY);
int minX=max(0,min(scrAX,min(scrBX,scrCX)));
int maxX=min(fb->width-1,max(scrAX,max(scrBX,scrCX)));
int minY=max(0,min(scrAY,min(scrBY,scrCY)));
int maxY=min(fb->height-1,max(scrAY,max(scrBY,scrCY)));
for(int scrX=minX;scrX<=maxX;scrX++) {
for(int scrY=minY;scrY<=maxY;scrY++) {
invViewPortTransform(scrX,scrY,fb->width,fb->height,ndcX,ndcY);
VECTOR4D ndcPixel(ndcX,ndcY,1,0);
VECTOR4D proportion4D=face->clipMatrixInv*ndcPixel;
VECTOR3D proportionFragment(proportion4D.x,proportion4D.y,proportion4D.z);
float pa=proportionFragment.x;
float pb=proportionFragment.y;
float pc=proportionFragment.z;
float sum=pa+pb+pc;
pa/=sum; pb/=sum; pc/=sum;
if(pa<0||pb<0||pc<0) continue; // outside the triangle
interpolate3f(pa,pb,pc,face->clipA.w,face->clipB.w,face->clipC.w,clipW);
interpolate3f(pa,pb,pc,face->clipA.z,face->clipB.z,face->clipC.z,frag.ndcZ);
frag.ndcZ/=clipW;
if(frag.ndcZ<-1||frag.ndcZ>1) continue; // outside the depth range
if(db!=NULL) {
float storeZ=readDepth(db,scrX,scrY);
if(storeZ<frag.ndcZ) continue; // a nearer fragment is already stored
writeDepth(db,scrX,scrY,frag.ndcZ);
}
interpolate3f(pa,pb,pc,face->clipA.x,face->clipB.x,face->clipC.x,frag.ndcX);
frag.ndcX/=clipW;
interpolate3f(pa,pb,pc,face->clipA.y,face->clipB.y,face->clipC.y,frag.ndcY);
frag.ndcY/=clipW;
interpolate3f(pa,pb,pc,face->clipA.nx,face->clipB.nx,face->clipC.nx,frag.nx);
interpolate3f(pa,pb,pc,face->clipA.ny,face->clipB.ny,face->clipC.ny,frag.ny);
interpolate3f(pa,pb,pc,face->clipA.nz,face->clipB.nz,face->clipC.nz,frag.nz);
interpolate3f(pa,pb,pc,face->clipA.s,face->clipB.s,face->clipC.s,frag.s);
interpolate3f(pa,pb,pc,face->clipA.t,face->clipB.t,face->clipC.t,frag.t);
FragmentOut outFrag;
fs(frag,outFrag);
drawPixel(fb,scrX,scrY,outFrag.r,outFrag.g,outFrag.b);
}
}
}
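Driving the function is then a loop over the scene's triangles once vertex processing has run. This is a hypothetical harness; Scene, runVertexStage, and the clear calls are illustrative names, not part of the code above:

clearFrameBuffer(fb);
clearDepthBuffer(db);
for (int i = 0; i < scene->faceCount; i++) {
    runVertexStage(&scene->faces[i]);   // fills clip coords, NDC x/y and clipMatrixInv
    rasterize(fb, db, myFragmentShader, &scene->faces[i]);
}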
And that's rasterization done — now you can go and implement OpenGL or DirectX yourself!
1. First: what is rasterization, and what does the process look like?

Rasterization is the process of converting geometric data, after a series of transforms, into pixels that are presented on the display device.

The essence of rasterization is coordinate transformation and the discretization of geometry.

A detailed account of the rasterization process will be filled in when time permits.

2. The following shows some of the details of mapping texels to pixels:
When rendering 2D output using pre-transformed vertices, care must be taken to ensure that each texel area correctly corresponds to a single pixel area, otherwise texture distortion can occur. By understanding the basics of the process that Direct3D follows
when rasterizing and texturing triangles, you can ensure your Direct3D application correctly renders 2D output.
Figure 1: 6 x 6 resolution display
Figure 1 shows a diagram wherein pixels are modeled as squares. In reality, however, pixels are dots, not squares. Each square in Figure 1 indicates the area lit by the pixel, but a pixel is always just a dot at the center of a square. This distinction, though
seemingly small, is important. A better illustration of the same display is shown in Figure 2:
Figure 2: Display is composed of pixels
This diagram correctly shows each physical pixel as a point in the center of each cell. The screen space coordinate (0, 0) is located directly at the top-left pixel, and therefore at the center of the top-left cell. The top-left corner of the display is therefore
at (-0.5, -0.5) because it is 0.5 cells to the left and 0.5 cells up from the top-left pixel. Direct3D will render a quad with corners at (0, 0) and (4, 4) as illustrated in Figure 3.
Figure 3
Figure 3 shows where the mathematical quad is in relation to the display, but does not show what the quad will look like once Direct3D rasterizes it and sends it to the display. In fact, it is impossible for a raster display to fill the quad exactly as shown because the edges of the quad do not coincide with the boundaries between pixel cells. In other words, because each pixel can only display a single color, each pixel cell is filled with a single color; if the display were to render the quad exactly as shown, the pixel cells along the quad's edge would need to show two distinct colors: blue where covered by the quad and white where only the background is visible.

Instead, the graphics hardware is tasked with determining which pixels should be filled to approximate the quad. This process is called rasterization. For this particular case, the rasterized quad is shown in Figure 4:
Figure 4
Note that the quad passed to Direct3D (Figure 3) has corners at (0, 0) and (4, 4), but the rasterized output (Figure 4) has corners at (-0.5,-0.5) and (3.5,3.5). Compare Figures 3 and 4 for rendering differences. You can see that what the display actually renders is the correct size, but has been shifted by -0.5 cells in the x and y directions. However, except for multi-sampling techniques, this is the best possible approximation to the quad. (Multi-sampling is covered thoroughly elsewhere in the documentation.) Be aware that if the rasterizer filled every cell the quad crossed, the resulting area would be of dimension 5 x 5 instead of the desired 4 x 4.
If you assume that screen coordinates originate at the top-left corner of the display grid instead of the top-left pixel, the quad appears exactly as expected. However, the difference becomes clear when the quad is given a texture. Figure 5 shows the 4 x 4
texture you'll map directly onto the quad.
Figure 5
Because the texture is 4 x 4 texels and the quad is 4 x 4 pixels, you might expect the textured quad to appear exactly like the texture regardless of the location on the screen where the quad is drawn. However, even slight changes in position influence how the texture is displayed. Figure 6 illustrates how a quad between (0, 0) and (4, 4) is displayed after being rasterized and textured.
Figure 6
The quad drawn in Figure 6 shows the textured output (with a linear filtering mode and a clamp addressing mode) with the superimposed rasterized outline. The rest of this article explains exactly why the output looks the way it does instead of looking like
the texture, but for those who want the solution, here it is: The edges of the input quad need to lie upon the boundary lines between pixel cells. By simply shifting the x and y quad coordinates by -0.5 units, texel cells will perfectly cover pixel cells and
the quad can be perfectly recreated on the screen. (Figure 8 illustrates the quad at the corrected coordinates.)
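In code, the fix is just a -0.5 offset on the x and y of each pre-transformed vertex. A sketch using a hypothetical screen-space vertex layout in the style of D3D9's XYZRHW vertices; only the vertex data is shown, not the draw call:

struct ScreenVertex { float x, y, z, rhw, u, v; };

// the 4 x 4 quad, with corners shifted from (0,0)-(4,4) to (-0.5,-0.5)-(3.5,3.5)
ScreenVertex quad[4] = {
    { -0.5f, -0.5f, 0.0f, 1.0f, 0.0f, 0.0f },  // top-left
    {  3.5f, -0.5f, 0.0f, 1.0f, 1.0f, 0.0f },  // top-right
    { -0.5f,  3.5f, 0.0f, 1.0f, 0.0f, 1.0f },  // bottom-left
    {  3.5f,  3.5f, 0.0f, 1.0f, 1.0f, 1.0f },  // bottom-right
};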
(Translator's note: the corner coordinates of the window you create are integers and therefore land on pixel centers; before the -0.5 shift, the boundary of the client area's leftmost pixel cell likewise sits at the center of a pixel cell.)
The details of why the rasterized output only bears slight resemblance to the input texture are directly related to the way Direct3D addresses and samples textures. What follows assumes you have a good understanding of texture filtering and texture addressing modes.
Getting back to our investigation of the strange pixel output, it makes sense to trace the output color back to the pixel shader: The pixel shader is called for each pixel selected to be part of the rasterized shape. The solid blue quad depicted in Figure 3
could have a particularly simple shader:
float4 SolidBluePS() : COLOR
{
    return float4( 0, 0, 1, 1 );
}
For the textured quad, the pixel shader has to be changed slightly:
texture MyTexture;
sampler MySampler =
sampler_state
{
    Texture = <MyTexture>;
    MinFilter = Linear;
    MagFilter = Linear;
    AddressU = Clamp;
    AddressV = Clamp;
};

float4 TextureLookupPS( float2 vTexCoord : TEXCOORD0 ) : COLOR
{
    return tex2D( MySampler, vTexCoord );
}
That code assumes the 4 x 4 texture of Figure 5 is stored in MyTexture. As shown, the MySampler texture sampler is set to perform bilinear filtering on MyTexture. The pixel shader gets called once for each rasterized pixel, and each time the returned color
is the sampled texture color at vTexCoord. Each time the pixel shader is called, the vTexCoord argument is set to the texture coordinates at that pixel. That means the shader is asking the texture sampler for the filtered texture color at the exact location
of the pixel, as detailed in Figure 7:
Figure 7
The texture (shown superimposed) is sampled directly at pixel locations (shown as black dots). Texture coordinates are not affected by rasterization (they remain in the projected screen-space of the original quad). The black dots show where the rasterization
pixels are. The texture coordinates at each pixel are easily determined by interpolating the coordinates stored at each vertex: The pixel at (0,0) coincides with the vertex at (0, 0); therefore, the texture coordinates at that pixel are simply the texture
coordinates stored at that vertex, UV (0.0, 0.0). For the pixel at (3, 1), the interpolated coordinates are UV (0.75, 0.25) because that pixel is located at three-fourths of the texture's width and one-fourth of its height. These interpolated coordinates are
what get passed to the pixel shader.
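As a quick check of that interpolation: the quad spans 4 pixels in each direction, so each pixel step advances the texture coordinate by a quarter.

float u = pixelX / 4.0f;  // pixel (3,1): u = 0.75
float v = pixelY / 4.0f;  //              v = 0.25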
The texels do not line up with the pixels; each pixel (and therefore each sampling point) is positioned at the corner of four texels. Because the filtering mode is set to Linear, the sampler will average the colors of the four texels sharing that corner. This explains why the pixel expected to be red is actually three-fourths gray plus one-fourth red, the pixel expected to be green is one-half gray plus one-fourth red plus one-fourth green, and so on.
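The resulting wash can be reproduced numerically. With the sample point exactly on the shared corner of four texels, the bilinear weights are all 1/4; assuming, say, one red texel (1,0,0) and three gray texels (0.5,0.5,0.5) — hypothetical values for illustration:

float r = 0.25f*1.0f + 0.75f*0.5f;  // 0.625
float g = 0.25f*0.0f + 0.75f*0.5f;  // 0.375
float b = 0.25f*0.0f + 0.75f*0.5f;  // 0.375 -> three-fourths gray plus one-fourth red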
To fix this problem, all you need to do is correctly map the quad to the pixels to which it will be rasterized, and thereby correctly map the texels to pixels. Figure 8 shows the results of drawing the same quad between (-0.5, -0.5) and (3.5, 3.5), which is
the quad intended from the outset.
Figure 8
In summary, pixels and texels are actually points, not solid blocks. Screen space originates at the top-left pixel, but texture coordinates originate at the top-left corner of the texture's grid. Most importantly, remember to subtract 0.5 units from the x and
y components of your vertex positions when working in transformed screen space in order to correctly align texels with pixels.
Converting the mathematical description of scene models (display lists) and their color information into pixels on the computer display is a process also known as rasterization; double buffering can additionally be applied to produce animation.
Converting the mathematical description and color information of scene models into pixels on the computer screen is called rasterization. While performing these steps, OpenGL may also carry out other operations, such as hidden-surface removal.

Mathematically, a point is ideal and has no size; on a raster display device, a pixel has a measurable size.

Converting a vector shape (such as a line or a circle) into a series of pixels is what we call rasterization.

Rasterization is the process of converting the image resulting from the projection or perspective transform into the coordinates of a raster device (such as a monitor) and finally displaying it.

What does "rasterize" mean?

It simply means turning something into a bitmap (a raster image) — to rasterize.

Rasterization is the process of converting a primitive into a two-dimensional image. Every point of that image carries color, depth, and texture data; such a point, together with its associated information, is called a fragment.

A raster is a grid: small cells arranged in rows and columns. Make the cells small enough and they become dots. A person can take in a picture at a glance, but for a computer to record it, the picture must be divided into such cells — a dot matrix. The finer the grid, the more detail the recorded image keeps.

A raster image — also called a bitmap, dot-matrix image, or pixel image — is, simply put, an image whose smallest unit is the pixel; it stores only per-point information and loses quality when scaled. Every pixel has its own color. The pictures on a computer are typically pixel images: enlarge one far enough and the dots turn into visible little color blocks. This format suits imagery with irregular shapes and rich, patternless color, such as photographs and scans.

BMP, GIF, JPG and similar file formats work this way; to redisplay them, a viewer simply draws the stored dot matrix to the screen or sends it to the printer.

The counterpart of the raster image is the vector image, which records descriptions of the positions and colors of points, lines, and surfaces. A vector image holds no direct pixel data, only descriptions of lines, surfaces, and basic shapes; a viewer interprets the descriptions and redraws the image. Shapes therefore scale without distortion, which suits logos, circuit diagrams, and design drawings, and the files are compact; many Flash animations, for example, are vector drawings.

CAD, Pro/E and similar programs produce files of this kind.

Maps gain even more from vector representation than from raster: maps must be zoomed to inspect regions in detail, and edits only require changing the existing vector data, whereas a raster map would have to be redrawn. Vector images do, however, have to be converted to pixels in real time for display, because the display itself has a pixel structure.
What is a (lenticular) raster?

A raster here means an optical material used to make stereoscopic images. Roughly speaking, a row of lenses of identical shape, size, and optical properties arranged vertically in a plane forms a lenticular strip; strips arranged side by side horizontally form a lenticular sheet, commonly called a raster. A stereoscopic image exploits this material by interleaving, in a fixed order, the details of several views of the same subject taken from different angles (or several different images from the same angle) into a single printed picture; the raster's masking together with transmitted or reflected light then delivers the different image details to each eye, producing a stereoscopic or flip effect.

By optical behavior there are two kinds:

1. Slit (parallax-barrier) rasters, which present the stereoscopic effect through transmitted light.

2. Lenticular rasters, which present the stereoscopic effect through reflected light.

Structurally, lenticular rasters come in two forms:

1. Lenticular sheets;

2. Lenticular film. Laminating lenticular film onto a transparent plastic or glass plate of the appropriate thickness yields a lenticular sheet.

The usual technical specifications of a raster include:
1. LPI (line count): the number of lenticules per inch. People also speak of the pitch, i.e. the width of a single lenticule. LPI and pitch are related by the formula: LPI = 25.4 mm / pitch.
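For example, a lenticule pitch of 0.508 mm corresponds to 25.4 / 0.508 = 50 LPI.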
2. Viewing angle: the range of angles within which the image forms or can be observed. In general, a small viewing angle means a thicker raster, suited to stereoscopic images; a large viewing angle means a thinner raster, suited to flip images. A high line count suits small stereoscopic or flip pictures viewed up close; a low line count suits large ones viewed from a distance.