I have read many books saying "clipping is performed in camera space, before passing all vertices through the perspective projection matrix". That seems justifiable and reasonable, because it can save unnecessary calculation...

However, when I read the D3D 8/9 documentation... it clearly says that the clipping stage is done after all geometry transformations...

So what's the truth? Why does D3D do clipping after the perspective projection transformation?

Please help me, and thanks!

Clipping is applied after viewport transforms because they affect what you're going to see, and where you're going to see it from. Clipping is performed upon the viewing frustum, which is what is being transformed here, rather than its content... it's a mathematical transposition performed before frustum culling. We don't just rotate and translate everything in the world and then perform clipping - we rotate and translate the actual viewing frustum in a static world, determine what's inside it, and THEN transform what's inside it. Make sense?

Anyway, that's the theory: we only perform math on what's going to be inside the transformed viewing frustum. At any rate, you don't need to worry about any of this unless you really want to.

First, thank you!

I do know the theory in your comment, but I don't think it's related to my question.

I just wonder whether the clipping is performed before or after the projection transformation...

Many books say it is "before", but the D3D documentation says it is "after".

thanks!

Maybe here I said "clipping" when I should have said "frustum culling".

i.e. clip away everything outside the viewing frustum... I mean that... maybe I used the wrong word... is it different?

IIRC, culling against the viewing frustum is the application's responsibility. That is, the app will pass to D3D only those triangles it is pretty sure will be displayed. You might, for instance, try to pass a triangle whose vertices have negative Z, which should be behind the viewer... See what happens ;).

D3D is only concerned with displaying those triangles. If a corner is outside the screen, D3D must still draw the part of the triangle that is within the screen: it will have to clip off the corner of the triangle that is not on screen. This is the clipping D3D is interested in.

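The app-side rejection described above can be sketched roughly like this; the plane representation and the bounding-sphere test are illustrative assumptions of mine, not anything D3D-specific:

```python
def outside_plane(center, radius, plane):
    """True if a bounding sphere is entirely on the negative side of a plane.
    A plane is (nx, ny, nz, d), with the inside defined by dot(n, p) + d >= 0."""
    nx, ny, nz, d = plane
    dist = nx * center[0] + ny * center[1] + nz * center[2] + d
    return dist < -radius

def cull(spheres, planes):
    """Keep only the spheres that are not entirely outside some frustum plane."""
    return [s for s in spheres
            if not any(outside_plane(s[0], s[1], p) for p in planes)]

# One plane of a frustum: a near plane at z = 1, facing +z.
near = (0.0, 0.0, 1.0, -1.0)
spheres = [((0, 0, 5), 1.0),    # well in front of the near plane: kept
           ((0, 0, -3), 1.0)]   # behind the viewer: culled
visible = cull(spheres, [near])
```

A real frustum has six such planes; the loop stays the same.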
Super-late reply, but the issue here is that both are valid.

For software renderers it is generally best to cull/clip/reject triangles as soon as possible.

This can be done before projection (the sooner you can reject triangles, the less processing you have to do regarding transforms, lighting, projection, rasterizer setup etc).

With hardware acceleration, perspective projection is generally 'free', so it doesn't really matter. Hence, most hardware implementations opt to clip after projection, because it makes the clipping itself simpler.

Some of my own software renderers use a hybrid approach: They will clip against the znear/zfar planes before projection, then clip against the screen rectangle after projection.

The reason for this is improved stability: clipping against znear/zfar can be rather tricky in post-perspective space, since z is no longer linear. Likewise, clipping against the screen rectangle before projection may give some rounding problems after projection (z does not have to be clipped pixel-perfect, but x/y does).

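The pre-projection part of that hybrid scheme can be sketched as follows; the function name and the znear value are illustrative, not from any particular renderer:

```python
def clip_polygon_near(poly, znear):
    """Clip a polygon (a list of (x, y, z) camera-space points) against the
    z = znear plane, keeping the part with z >= znear. Crossing edges are
    split by linear interpolation, which is safe here because camera-space
    z is still linear (no perspective divide has happened yet)."""
    out = []
    n = len(poly)
    for i in range(n):
        a, b = poly[i], poly[(i + 1) % n]
        a_in, b_in = a[2] >= znear, b[2] >= znear
        if a_in:
            out.append(a)
        if a_in != b_in:  # the edge crosses the plane: emit the intersection
            t = (znear - a[2]) / (b[2] - a[2])
            out.append(tuple(a[k] + t * (b[k] - a[k]) for k in range(3)))
    return out

# A triangle with one vertex behind the near plane comes out as a quad:
tri = [(0.0, 0.0, 2.0), (1.0, 0.0, 2.0), (0.0, 1.0, -2.0)]
clipped = clip_polygon_near(tri, 0.1)
```

The same loop with a different inside-test clips against the far plane, or against the screen rectangle after projection.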
Regarding transforms and clipping in Shader-oriented render pipes: in the Vertex Shader stage.

The input vertex (one at a time!) is given in ModelSpace.

The vertex is first transformed from ModelSpace to WorldSpace (typically via a matrix called WorldMatrix). Then it is transformed from WorldSpace to Camera ViewSpace (typically via a matrix called ViewMatrix). Finally, it is transformed from ViewSpace to ClipSpace - this is where the 'Projection Matrix' is applied - this guy contains two things: an XYZ scaling value, and a translation in Z. The result is a vertex in ClipSpace, but the vertex is now a 'homogeneous 4D vertex', which means there is a valid W field in the vertex.

We don't perform the final operation - clipping - it's done by the hardware, based strictly on the value of the W field.

But if we did want to do it by hand, we would simply have to 'Dehomogenize' (kinda Normalize) the 4D vector, to turn it back into a valid 3D vector, which is just to divide everything by W - the result is a vector X,Y,Z(,1) where XYZ are all between -1 and +1 :)

"Regarding transforms and clipping in Shader-oriented render pipes: in the Vertex Shader stage."

*After* the vertex shader stage, to be exact.

Or actually, in a modern D3D11 pipe, after vertex, hull/domain and geometry shader stages.

Basically at the point where they move from vertex processing to pixel processing (all vertex processing is done before projection). There is some fixed-function hardware which takes care of clipping and the actual perspective divide (projection), and then the rasterization.

This is known as the 'rasterizer stage' in the D3D pipeline, see here: http://msdn.microsoft.com/en-us/library/windows/desktop/bb205125(v=vs.85).aspx

Then the individual pixels are handed off to the pixel shaders.

The projection isnt handled for you, you provide the transform for that.

Same in opengl.

The projection matrix contents are just a scale that accounts for the aspect ratio you nominated and the frustum shrinkage (foreshortening), and a translation that moves the scene so the nearplane is at one end instead of in the middle.

You need to perform that part yourself, only clipping occurs outside the VS.

Result of WVP * vertex is NOT clip space, until we do the homogenous divide by w, which we do not do ourselves in shaders.

It was in directx that I actually started to understand the three major transforms that occur in the vertex shader - opengl had confused me with its notion of a 'modelview' matrix (which is simply, world*view) - our final transform is world*view*proj*vertex

VS is highly overlooked in modern shaders - everyone's looking at pixel effects, but there are some effects to be had in the VS stage too.

"The projection isnt handled for you, you provide the transform for that. Same in opengl."

I am talking about the division by W ('perspective divide'), which effectively does the perspective projection (projection of a 3D world onto a 2D surface... you do know what projection means in this context, don't you?), provided you have set up your vertices properly for that.

Using the legacy pipeline, yes, you could provide a projection transform matrix for that directly to the API. When using shaders, it's pretty much your own responsibility that the resulting vertices have a correct homogeneous format. How you do that, is up to you.

The actual division is done by the fixed-function hardware, not in your shader (although you could do that, since if you pass the vertices with W=1, the division done in the fixed-function part of the pipeline effectively does nothing).

The MSDN page I linked to even says as much:

"Rasterization includes clipping vertices to the view frustum, **performing a divide by z to provide perspective**, mapping primitives to a 2D viewport, and determining how to invoke the pixel shader."

But what you're saying is confusing. A projection matrix does not perform the perspective projection itself. The division by W does that (matrix operations only involve multiplication and addition, there is no way to perform such a division directly with a matrix transform). The role of the projection matrix is to set up your vertices in such a way that the division by W will do the actual projection.

In general this means that the z-coordinate will be copied to w (a bit confusing... the above MSDN quote says you divide by z, which effectively you do, except you use the w coordinate). This then frees up the z-coordinate itself, so it can be scaled and translated to a range that is suitable for the zbuffer (generally it will set it up so at znear, z=0 and at zfar, z=1). A projection matrix will generally also adjust the x:y aspect ratio, so the proper horizontal and vertical FOV will result from the 2d projection.
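The behaviour described here (copying z into w, remapping z to the [0, 1] range, adjusting the aspect ratio) can be sketched as a small projection-matrix setup. The layout below follows the row-vector, left-handed convention in the style of D3DX's perspective FOV matrix; treat the exact form as an illustration rather than a verbatim reproduction:

```python
import math

def perspective(fov_y, aspect, zn, zf):
    """Row-vector D3D-style projection matrix: scales x/y by the FOV,
    remaps camera-space z in [zn, zf] to [0, 1], and copies z into w."""
    ys = 1.0 / math.tan(fov_y / 2.0)
    xs = ys / aspect
    q = zf / (zf - zn)
    return [[xs, 0.0, 0.0, 0.0],
            [0.0, ys, 0.0, 0.0],
            [0.0, 0.0, q, 1.0],   # the 1 in the last column copies z into w
            [0.0, 0.0, -q * zn, 0.0]]

def transform(v, m):
    """Treat v = (x, y, z) as a row vector with w = 1 and multiply by m."""
    x, y, z = v
    return tuple(x * m[0][c] + y * m[1][c] + z * m[2][c] + m[3][c] for c in range(4))

m = perspective(math.pi / 2.0, 1.0, 1.0, 100.0)
clip = transform((0.0, 0.0, 10.0), m)   # clip[3] == 10.0: w holds the original z
ndc = tuple(c / clip[3] for c in clip)  # the divide by w, done by the hardware
```

After the divide, z lands between 0 (at znear) and 1 (at zfar), ready for the zbuffer.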

"It was in directx that I actually started to understand the three major transforms that occur in the vertex shader - opengl had confused me with its notion of a 'modelview' matrix (which is simply, world*view) - our final transform is world*view*proj*vertex"

Perhaps you should try to write a simple 3d renderer entirely in software sometime. It seems there are still a few blank spots in your knowledge of the 3d pipeline.

Really, what drug are you on?

I already do all this stuff WITHOUT matrices, IN shader, or not.

I do not need lessons, not from you! lol

"Really, what drug are you on?"

I was wondering the same thing... Nearly every time I post something, you have to reply with some semi-related stuff, often filled with confusing info or half-truths.

Why did you even bother to respond to this thread at all? And what exactly were you trying to add with your comment that was not already in the post that I made, and the MSDN-link I referred to?

It seems you have some kind of uncontrollable urge to prove that you know everything better than other people.

The sad part is that the posts you then make prove the opposite.

For example:

"in the Vertex Shader stage"

False, in the rasterizer stage, see http://msdn.microsoft.com/en-us/library/windows/desktop/ff476882(v=vs.85).aspx

"We don't perform the final operation - clipping - it's done by the hardware, based strictly on the value of the W field.

But if we did want to do it by hand, we would simply have to 'Dehomogenize' (kinda Normalize) the 4D vector, to turn it back into a valid 3D vector, which is just to divide everything by W - the result is a vector X,Y,Z(,1) where XYZ are all between -1 and +1"

Very confusing. The way your sentence reads is that dividing by W will do the clipping, ensuring that XYZ are always between -1 and +1.

Obviously you can only ensure that XYZ are within a certain range after you actually clipped them (and I don't even want to get into the fact that OpenGL and D3D don't use the same range for z, so the range may also be 0..1 rather than -1..1). That's the whole reason why we need to do clipping in the first place!

And clipping is not based strictly on W, obviously the X,Y,Z values of the vertices are also important, as they contain the actual position of the vertex in 4D space.

Now I'm not sure whether you actually think that dividing by W does the clipping, or what it is exactly that you were trying to say, but it isn't helping any.

You also didn't bother to explain where the W coordinate actually came from, which would make the W division actually mean anything useful at all.

In case anyone still wonders what actual polygon clipping entails, here is a common algorithm: http://en.wikipedia.org/wiki/Sutherland%E2%80%93Hodgman

I use a variation of that in all my software renderers.
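For reference, here is a minimal sketch of the Sutherland-Hodgman idea, clipping a 2D polygon against a screen rectangle one plane at a time. This is an illustration of the linked algorithm, not the poster's renderer code:

```python
def lerp(a, b, t):
    return (a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1]))

def clip_against(poly, inside, intersect):
    """One Sutherland-Hodgman pass: clip a polygon against a single plane."""
    out = []
    for i in range(len(poly)):
        a, b = poly[i], poly[(i + 1) % len(poly)]
        if inside(a):
            out.append(a)
        if inside(a) != inside(b):       # the edge crosses the plane
            out.append(intersect(a, b))  # emit the intersection point
    return out

def clip_rect(poly, xmin, ymin, xmax, ymax):
    """Clip a 2D polygon against a rectangle, one edge at a time."""
    planes = [
        (lambda p: p[0] >= xmin, lambda a, b: lerp(a, b, (xmin - a[0]) / (b[0] - a[0]))),
        (lambda p: p[0] <= xmax, lambda a, b: lerp(a, b, (xmax - a[0]) / (b[0] - a[0]))),
        (lambda p: p[1] >= ymin, lambda a, b: lerp(a, b, (ymin - a[1]) / (b[1] - a[1]))),
        (lambda p: p[1] <= ymax, lambda a, b: lerp(a, b, (ymax - a[1]) / (b[1] - a[1]))),
    ]
    for inside, intersect in planes:
        poly = clip_against(poly, inside, intersect)
        if not poly:  # fully rejected
            break
    return poly

# A triangle poking out of three sides of the rectangle gains vertices:
tri = [(-1.0, 0.0), (3.0, 2.0), (3.0, -2.0)]
clipped = clip_rect(tri, 0.0, -1.0, 2.0, 1.0)
```

Running this, the 3-vertex triangle comes out with 6 vertices - exactly the kind of growth described later in this thread: clipping cuts polygons, it doesn't merely reject them.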

"The projection isnt handled for you, you provide the transform for that."

More confusion, as I already pointed out above: a projection transform matrix is meant to set up the vertices for the projection. It does not perform the projection itself. That is the division by W (which is the original z coordinate of the vertex).

"The projection matrix contents are just a scale that accounts for the aspect ratio you nominated and the frustum shrinkage (foreshortening), and a translation that moves the scene so the nearplane is at one end instead of in the middle."

Again, too vague. Still ignoring what the projection matrix does to set up W. Not helping anything.

"You need to perform that part yourself, only clipping occurs outside the VS."

Clipping *and* perspective divide by W (unless you are still under the impression that they are the same thing?)

"It was in directx that I actually started to understand the three major transforms that occur in the vertex shader - opengl had confused me with its notion of a 'modelview' matrix (which is simply, world*view) - our final transform is world*view*proj*vertex"

That is just one possible way.

Another step that is hidden from these transforms is the scaling/translating to viewport. In D3D/OpenGL this is done after clip space, and there is no direct corresponding matrix in the API.

When you write a software renderer however, you might want to choose to not have a unit cube for clip space, but instead clip directly to viewport coordinates.

In my Java renderer therefore, I had world*view*proj*viewport*vertex.

This way the vertices would be output directly in 0..width and 0..height range, then run through the clipper, and the resulting polygons from the clipper could be rendered directly to screen, saving extra multiplications to scale up each polygon after clipping.

My 286 renderer on the other hand does not use matrices at all, and in fact does not even use 4D homogeneous coordinates. Since I do not require a zbuffer, I do not need to scale z to a 0..1 range, and therefore I can perform the perspective divide directly with the z coordinate.

So, there are many ways to get similar results, just as the original question: Clipping can happen at various places in the pipeline. In theory you could even clip in object space. You just need to translate the 6 planes of your frustum back from clipspace into object space, and then clip directly against them.

This is not a common approach however, because of rounding issues.

And as said before, with 3d hardware, be it using D3D or OpenGL, the clipping is hardwired in the rasterizer stage, so after geometry processing.

"Clipping *and* perspective divide by W (unless you are still under the impression that they are the same thing?)"

They ARE the same thing... except you are mistaken in thinking that dividing by W performs perspective divide, thats done earlier by us using the scale portion of the projection transform.

The act of dividing XYZ by W normalizes the result - its in unit space.

The clipping is performed by early rejection based on the W value by hardware BEFORE it performs the divide.

Everythings pretty easy when you are just rolling together matrices with no idea what they do.

"They ARE the same thing... except you are mistaken in thinking that dividing by W performs perspective divide, thats done earlier by us using the scale portion of the projection transform. The act of dividing XYZ by W normalizes the result - its in unit space. The clipping is performed by early rejection based on the W value by hardware BEFORE it performs the divide."

No, they are NOT the same.

Clipping is not just rejection. Polygons are clipped, and not only to Z, but also to X and Y. They are only rejected if the polygon is entirely outside the unit cube in clipspace (which stands for the viewing frustum after projection).

If the polygon intersects any of the planes of the unit cube, the polygons are clipped to fit inside. Hence the name: clipping, as in what you'd do with scissors. Not 'rejecting'. As done by an algo such as the Sutherland-Hodgman I've linked to earlier.

Attached is a picture of a clipped object. The polygons are clipped to EXACTLY the near-plane, and also the planes at the sides of the screen, to avoid any pixels from falling outside the framebuffer. As you can see, the cut is 'pixel-perfect' to the near-plane and sides of the screen. If it would only reject polygons, you would clearly see the faceted contours of the remaining polygons.

Oh, and you're still wrong about that projection matrix as well. How can a scaling operation with a static value in a matrix give a perspective effect that changes on distance? Indeed, it cannot. You need a division by a depth-value to get that perspective effect.
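A trivial numeric illustration of this point - a static scale yields the same projected size at every depth, while dividing by depth does not (the numbers are arbitrary):

```python
height = 1.0               # world-space height of some edge
z_near, z_far = 5.0, 10.0  # two depths, the second twice as far away

scale = 0.5                # a static scale, as baked into a matrix
h1 = height * scale
h2 = height * scale        # identical at both depths: no foreshortening

p1 = height / z_near       # divide by the depth value instead...
p2 = height / z_far        # ...and the far edge projects half as tall
```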

You know that old saying, that it's better for people to think you're an idiot than to speak and take away all doubt?

Also, not sure what you're trying to argue here? The screenshot here was taken from one of my software renderers. I wrote all code myself, including the perspective projection and the clipping (which can both clearly be seen to be working). And you're trying to argue that *I* don't know how they work?

What Scali is saying is correct.

Clipping is not just about rejection - removing entire polygons from the pipeline, but also about cutting the polygons so they fit within the view frustum. Sometimes clipping a single polygon will result in multiple polygons being generated.

The frustum clipping region is hardcoded in hardware - hence the projection matrix - to scale the scene to fit inside. In this way the programmer moves the geometry in the scene, instead of moving the clipping planes around the geometry.

The clipping is performed after the projection matrix is applied (for the reason described above), but before the perspective divide (division by W) that Scali talked about.

The reason the programmer does not perform the perspective divide himself is because the 4D vertices that are produced from the perspective matrix multiplication are passed onto the clipper. The clipper then operates on these vertices in 4D space, without the so-called "normalization" that Homer spoke of.

ClipSpace is a volume from (-1,-1,-1) to (+1,+1,+1) , and when we Normalize the Homogenous ViewSpace Coordinate, the result is mapped to that 'unit cube' range. The hardware performs this division by w, and rejects vertices that are outside the unit cube after division by w.

We are responsible for everything, including foreshortening, with the exceptions ONLY, of division by W, and rejection (and in fact it's faster to do the rejection ourselves in a Geometry Shader, but that's another story).

"ClipSpace is a volume from (-1,-1,-1) to (+1,+1,+1), and when we Normalize the Homogenous ViewSpace Coordinate, the result is mapped to that 'unit cube' range."

Firstly, in D3D (the API mentioned in the OP here) clipspace is (-1,-1,0) to (+1,+1,+1) (see here: http://msdn.microsoft.com/en-us/library/windows/desktop/bb147302(v=vs.85).aspx)

Secondly, you seem to be confusing normalization with homogeneous projection.

Normalization is the process of scaling a vector to unit length: http://en.wikipedia.org/wiki/Normalized_vector

Division by W would only be a normalization if the W-coordinate would contain the euclidean length of the 4D vector. But does it? No, it does not. It contains the z-value, as has been established earlier (one of the things the projection matrix does is to copy the z-value to w).

Now clearly, the z-value is not the euclidean length in the general case.

Besides, what sense would it make to treat all vertex positions in an object as vectors from the origin to that position, and then normalizing them?

Here is a page that explains projection in some detail: http://www.mvps.org/directx/articles/linear_z/linearz.htm

Or also this: http://en.wikipedia.org/wiki/Transformation_matrix#Perspective_projection

Explaining the division by z to project to the z=1 plane.

In other words, the division by w *is* the foreshortening operation. Which is *not* our responsibility.
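A small sketch of the distinction being argued here - dividing by w (which holds the original z) is a projection, not a normalization; the example vector is chosen for round numbers:

```python
import math

# A clip-space vertex where the projection matrix copied z (= 10) into w:
v = (3.0, 4.0, 10.0, 10.0)

# Perspective divide: divide by w, projecting onto the z = 1 plane.
projected = tuple(c / v[3] for c in v)     # (0.3, 0.4, 1.0, 1.0)

# Normalization: divide by the euclidean length, scaling to unit length.
length = math.sqrt(sum(c * c for c in v))  # 15.0 for this vector
normalized = tuple(c / length for c in v)  # a different result entirely
```

The two agree only in the special case where w happens to equal the vector's length, which it does not in general.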

"The hardware performs this division by w, and rejects vertices that are outside the unit cube after division by w."

Yes, we've established that. But you still seem to be in denial about the fact that the hardware also clips polygons that intersect any of the faces of the unit cube.

A simple test you could do is to take a very simple object, eg a cube, rotate it around while it intersects the near-plane (much as I've done in the screenshot above).

Now, each side of the cube would consist of only two triangles. So if there is no clipping, as you claim, then these triangles will either be rendered completely, or rejected completely. (You speak of vertex rejection, but a triangle already has the minimum number of vertices for a polygon, so as soon as one of its vertices is rejected, you no longer have a polygon, but merely a line - which is infinitely thin, and which, even if it were rendered as a degenerate polygon, would result in no pixels being drawn at all, given D3D/OGL rasterization rules.)

However, what you will actually be seeing is that the triangles will be clipped to the near-plane, and you will still see the part of the triangles that are behind the near-plane (cutting a small hole in that side of the cube, rather than rejecting the whole side at once). After clipping, the polygons will actually have MORE vertices than they did before. So where have these come from, if all you can do is reject vertices?

Then again, it should already have been obvious that something like this is happening at all sides of the screen as well. Namely, if you would take the cube and move it so that it partly falls off of any side of the screen (again, as I've done in my screenshot), you will see that the triangles will not be rejected as soon as a part of them will fall off the side, but instead the triangle is clipped to the screen rectangle, and the part of the triangle that is still on screen will still be drawn.

So far you have completely ignored this part of the rendering pipeline, even though I've brought it up many times, and even provided a screenshot of polygons actually being clipped to various planes of clipspace. Comment on that.

Attached are two new screenshots, this time showing a single triangle, which is being clipped to the sides of the screen.

As you can see, the first one actually has 5 vertices now, rather than 3, since two of the corners of the triangle have been replaced with new edges (after clipping against the x=-1 and x=1 planes of the unit cube), each replacing a single vertex with two vertices.

The second triangle even has 6 vertices, since it is also being clipped to the bottom of the screen (y=-1 plane).

Well, Homer has had enough time to respond. I will take his silence as a sign of agreement, and also a sign of not being man enough to admit one's mistakes, let alone to apologize for his cheeky, even insulting tone.

"Everythings pretty easy when you are just rolling together matrices with no idea what they do."

Well, you've demonstrated as much, Homer. Oh, the irony.

Why would I care if you follow my blog, where all this was already covered?

Have a nice day.

You didn't cover clipping anywhere. You clearly don't understand the whole rasterizing part of the pipeline. You've proven that much in this thread.

I think it's time to drop the all-knowing attitude and get back down to earth. Nobody buys it anyway.