The normalization is the equivalent of the clipping operation, forcing vertices into a unit space, but with some clever hardware optimizing we can predict the result of the w divide and reject early, thus the clipping itself is done in hardware - but the transform into unit space is up to us.
Posted on 2012-09-21 03:06:12 by Homer

The normalization is the equivalent of the clipping operation, forcing vertices into a unit space, but with some clever hardware optimizing we can predict the result of the w divide and reject early, thus the clipping itself is done in hardware - but the transform into unit space is up to us.


You still don't have a clue, do you?
1) There *is* no normalization being done on position data anywhere.
2) Normalizing position data has no geometric meaning whatsoever. It would merely corrupt your meshes on screen. For example, take a cube with reasonably tessellated sides, centered around (0,0,0). If you normalize all its vertices, the result is a (unit) sphere (all vertex positions will be normalized to have the same (unit) length from the origin).
'Forcing vertices into a unit space' makes no sense. If they don't fit, they don't fit.
3) The homogeneous division (which you mistakenly equate to normalization) does not perform clipping. A link to a common polygon clipping algorithm was already provided. That algorithm does not even remotely resemble a homogeneous division.
4) Not all your geometry will always fit into the unit cube, not even after homogeneous division. See the example screenshots of triangles and a torus object which fall partly off the screen, and hence are partly outside the unit cube.
Anyone with a bit of common sense will see that there's no way the clipped triangles in the screenshot can be the result of merely rejecting vertices from a triangle.
In fact, anyone with a bit of common sense will see that rejecting a single vertex from a triangle is equivalent to rejecting the entire triangle.
And with a bit more common sense, they would also see that there is no way to clip a triangle like that by merely performing a division.
5) Part of transforming into unit space is the homogeneous division, which is automatically performed by the fixed-function rasterization stage on the GPU, and as such is not 'up to us'.
6) Clipping does not force vertices into a space. It clips off the parts of the geometry that do not fit in the space, and introduces new geometry (new polygon edges) so that the resulting object DOES fit (without any corruption/warping/etc), as the sketch below illustrates.
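To make points 3 and 6 concrete, here is a minimal sketch of a single Sutherland-Hodgman clipping pass against one plane (illustrative C, not any particular implementation; the other planes of the clip region are handled the same way). Note how the output polygon can end up with more vertices - and therefore more edges - than the input:

/* Minimal sketch: clip a 2D polygon against the plane x >= xmin
   (one Sutherland-Hodgman pass). Illustrative only. */
#include <stdio.h>

typedef struct { float x, y; } vec2;

/* Clip polygon 'in' (n vertices) against x >= xmin, writing the result to 'out'.
   Returns the number of output vertices (which can be larger than n). */
static int clip_xmin(const vec2 *in, int n, float xmin, vec2 *out)
{
    int count = 0;
    for (int i = 0; i < n; i++) {
        vec2 a = in[i];
        vec2 b = in[(i + 1) % n];
        int a_inside = a.x >= xmin;
        int b_inside = b.x >= xmin;

        if (a_inside)
            out[count++] = a;                  /* keep vertices on the visible side */

        if (a_inside != b_inside) {            /* the edge crosses the plane: */
            float t = (xmin - a.x) / (b.x - a.x);
            vec2 p = { xmin, a.y + t * (b.y - a.y) };
            out[count++] = p;                  /* ...so a NEW vertex (and edge) is introduced */
        }
    }
    return count;
}

int main(void)
{
    /* an example triangle that pokes out of the left edge of a 320x200 screen */
    vec2 tri[3] = { { -10, 20 }, { 50, 240 }, { 380, 100 } };
    vec2 out[8];
    int n = clip_xmin(tri, 3, 0.0f, out);
    for (int i = 0; i < n; i++)
        printf("(%g, %g)\n", out[i].x, out[i].y);
    return 0;                                  /* 3 vertices go in, 4 come out */
}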

How long are you going to try to keep this up?

In fact, let's make this interesting.
In the second clipped triangle screenshot that I posted earlier, we have a screen of 320x200 resolution.
The projected triangle has the following coordinates in screenspace (x,y):
(-10,20), (50,240), (380,100)

How do we get from these three coordinates (which are all outside the screen area) to the clipped triangle we see on screen?
Which of the vertices do we reject? What do we divide?
Posted on 2012-09-21 03:27:47 by Scali
Isn't it strange that my renderer works at all, given that I am completely wrong :)


// This shader sourcecode belongs at the start of the shader 'main' function,
// it transforms the input ModelSpace vertex position into WorldSpace, ViewSpace and ClipSpace.
//
const char* szQuaternionShaderMain =
" //modelspace to worldspace\n"
" vec3 vPos_WorldSpace = trans_for(myVertex, s_model); // ModelSpace to WorldSpace, no matrices\n"
" //worldspace to viewspace\n"
" vec3 vPos_ViewSpace = trans_inv(vPos_WorldSpace, s_cam); // WorldSpace to ViewSpace, no matrices\n"
" //viewspace to clipspace\n"
" vec4 vPos_ClipSpace = get_projection(vPos_ViewSpace, myProjectionData); // ViewSpace to ClipSpace, (and again, no matrices)\n";


Posted on 2012-09-22 04:46:40 by Homer

Isn't it strange that my renderer works at all, given that I am completely wrong :)


Not at all, since this is taken care of by the GPU. You just don't understand what it does.
As I said, try writing a renderer entirely in software. You'll figure out soon enough how projection, clipping and rasterizing go together.
Posted on 2012-09-22 04:58:21 by Scali
Since I transform the input ModelSpace vertex into all three Major Spaces, I can easily implement things like View Dependent shader algorithms that require the ViewSpace vertex position as input - which is not possible if the transform (matrix) was baked.
Posted on 2012-09-22 05:05:26 by Homer
For UVs, texturespace can be represented as a single quaternion plus a coordinate, instead of a coordinate, normal, tangent and binormal, saving 60% of the memory requirements to send that rotation. The normal(s) can be pulled out in the shader, or sent in addition to the TextureSpace Quaternion.
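For what it's worth, a minimal sketch of how the frame can be unpacked from such a quaternion (illustrative C on the CPU side; the same maths applies in a shader, and the names and axis convention here are assumptions, not taken from any particular engine):

/* Sketch: recover tangent, bitangent and normal from a unit quaternion q
   that rotates tangent space into model space. Axis convention is assumed. */
typedef struct { float x, y, z; } vec3;
typedef struct { float x, y, z, w; } quat;

static vec3 cross3(vec3 a, vec3 b)
{
    vec3 r = { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
    return r;
}

/* Rotate v by q:  v' = v + 2*w*(u x v) + 2*(u x (u x v)), with u = q.xyz */
static vec3 quat_rotate(quat q, vec3 v)
{
    vec3 u  = { q.x, q.y, q.z };
    vec3 t  = cross3(u, v);
    vec3 t2 = cross3(u, t);
    vec3 r  = { v.x + 2.0f*(q.w*t.x + t2.x),
                v.y + 2.0f*(q.w*t.y + t2.y),
                v.z + 2.0f*(q.w*t.z + t2.z) };
    return r;
}

/* The basis vectors are simply the rotated canonical axes. */
static void frame_from_quat(quat q, vec3 *tangent, vec3 *bitangent, vec3 *normal)
{
    vec3 ex = { 1, 0, 0 }, ey = { 0, 1, 0 }, ez = { 0, 0, 1 };
    *tangent   = quat_rotate(q, ex);
    *bitangent = quat_rotate(q, ey);
    *normal    = quat_rotate(q, ez);
}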

Posted on 2012-09-22 05:09:21 by Homer

Since I transform the input ModelSpace vertex into all three Major Spaces, I can easily implement things like View Dependent shader algorithms that require the ViewSpace vertex position as input - which is not possible if the transform (matrix) was baked.


Congratulations, you managed to completely change topic yet again, and dump in a bunch of gratuitous buzzwords to cloud the issue at hand.
Posted on 2012-09-22 05:11:54 by Scali

For UVs, texturespace can be represented as a single quaternion plus a coordinate, instead of a coordinate, normal, tangent and binormal, saving 60% of the memory requirements to send that rotation. The normal can be pulled out in the shader, or sent in addition to the TextureSpace Quaternion.


Newsflash: Since the basis of a space is orthogonal by definition (orthonormal even, in a strict mathematical sense), it follows that you only need to define two axes for a 3d space. The third axis is implied, since it has to be orthogonal to both given axes.
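In code terms (illustrative snippet, hypothetical names), the implied axis is just a cross product of the two that are given:

typedef struct { float x, y, z; } vec3;

/* The third axis of an orthonormal basis follows from the other two,
   e.g. bitangent = cross(normal, tangent).
   (A stored +/-1 sign covers mirrored bases, if needed.) */
static vec3 implied_axis(vec3 a, vec3 b)
{
    vec3 c = { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
    return c;
}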
Ergo, your calculations are off, yet again.
Posted on 2012-09-22 05:14:50 by Scali
I told you I was moving on. I have already proven my code works, had it peer reviewed, and so on.
I'm not talking about just transforms anymore, shaders need other kinds of inputs.
My shaders are constructed based upon input specifications.
In this new GLES-based engine, the built-in shaders are now all based on pure Quaternion inputs.
But there is still underlying support for Matrix based shaders.
They are all, to some degree, based on input specs.
There are 10 of them, and one of them is a general-purpose surfaceshader generator, which has the most input specs (a vertexformat and an advanced material).

I will now let this thread die.
You can comment on other threads.


Posted on 2012-09-22 05:18:54 by Homer

I told you I was moving on. I have already proven my code works, had it peer reviewed, and so on.


You're still posting in a topic that is about clipping though.
And you still don't know how clipping works.
Posted on 2012-09-22 05:24:00 by Scali
My clipping works just fine, it's performed on ClipSpace vertices, which are post-projective. I don't understand why you have a problem with it.

Posted on 2012-09-22 06:56:56 by Homer

My clipping works just fine, it's performed on ClipSpace vertices, which are post-projective. I don't understand why you have a problem with it.


Clipping works fine because it's not *your* clipping. It is hardwired in the rasterizer stage of the GPU.
The 'problem' I have is that the question in the OP was about the clipping in the rasterizer stage, and the statements you have made about projection and clipping in this thread are basically complete nonsense.
Posted on 2012-09-22 17:22:00 by Scali
I have said all along that we do not perform the actual clipping; I merely described how we would do it in software.
You have confused insight with implementation.
Posted on 2012-09-23 06:05:22 by Homer

I have said all along that we do not perform the actual clipping; I merely described how we would do it in software.
You have confused insight with implementation.


Please, drop the hollow rhetoric. I'm not impressed.
Also, just because you're TRYING to confuse everyone to cover up your mistakes, doesn't mean it's actually working.
I'm not confused at all. I pointed out VERY clearly that there are many mistakes in your 'description':


The normalization is the equivalent of the clipping operation, forcing vertices into a unit space, but with some clever hardware optimizing we can predict the result of the w divide and reject early, thus the clipping itself is done in hardware - but the transform into unit space is up to us.


You still don't have a clue, do you?
1) There *is* no normalization being done on position data anywhere.
2) Normalizing position data has no geometric meaning whatsoever. It would merely corrupt your meshes on screen. For example, take a cube with reasonably tessellated sides, centered around (0,0,0). If you normalize all its vertices, the result is a (unit) sphere (all vertex positions will be normalized to have the same (unit) length from the origin).
'Forcing vertices into a unit space' makes no sense. If they don't fit, they don't fit.
3) The homogeneous division (which you mistakenly equate to normalization) does not perform clipping. A link to a common polygon clipping algorithm was already provided. That algorithm does not even remotely resemble a homogeneous division.
4) Not all your geometry will always fit into the unit cube, not even after homogeneous division. See the example screenshots of triangles and a torus object which fall partly off the screen, and hence are partly outside the unit cube.
Anyone with a bit of common sense will see that there's no way the clipped triangles in the screenshot can be the result of merely rejecting vertices from a triangle.
In fact, anyone with a bit of common sense will see that rejecting a single vertex from a triangle is equivalent to rejecting the entire triangle.
And with a bit more common sense, they would also see that there is no way to clip a triangle like that by merely performing a division.
5) Part of transforming into unit space is the homogeneous division, which is automatically performed by the fixed-function rasterization stage on the GPU, and as such is not 'up to us'.
6) Clipping does not force vertices into a space. It clips off the parts of the geometry that do not fit in the space, and introduces new geometry (new polygon edges) so that the resulting object DOES fit (without any corruption/warping/etc).

How long are you going to try to keep this up?

In fact, let's make this interesting.
In the second clipped triangle screenshot that I posted earlier, we have a screen of 320x200 resolution.
The projected triangle has the following coordinates in screenspace (x,y):
(-10,20), (50,240), (380,100)

How do we get from these three coordinates (which are all outside the screen area) to the clipped triangle we see on screen?
Which of the vertices do we reject? What do we divide?
Posted on 2012-09-23 07:21:40 by Scali
I make lots of mistakes, but I know what I'm talking about.
I know what these transforms do and how to apply them , with or without matrices.
Yes, clip space is a unit cube, but your clip space vertices might be outside it, and then the GPU will clip them - and?
I'm sure you have a point to make, but you have no ammunition as everything I've said is accurate.
Posted on 2012-09-23 08:47:31 by Homer

I make lots of mistakes, but I know what I'm talking about.


I beg to differ. You *act* like you know what you're talking about.
But you don't. Ever heard of Dunning-Kruger?


Yes, clip space is a unit cube, but your clip space vertices might be outside it, then the gpu will clip them - and?


Well, this already is in direct contradiction of things you said earlier, such as:
We don't perform the final operation - clipping - it's done by the hardware, based strictly on the value of the W field.
But if we did want to do it by hand, we would simply have to 'Dehomogenize' (kinda Normalize) the 4D vector, to turn it back into a valid 3D vector, which is just to divide everything by W - the result is a vector X,Y,Z(,1) where XYZ are all between -1 and +1


Or this:
ClipSpace is a volume from (-1,-1,-1) to (+1,+1,+1) , and when we Normalize the Homogenous ViewSpace Coordinate, the result is mapped to that 'unit cube' range. The hardware performs this division by w, and rejects vertices that are outside the unit cube after division by w.
We are responsible for everything, including foreshortening, with the exceptions ONLY, of division by W, and rejection (and in fact it's faster to do the rejection ourselves in a Geometry Shader, but that's another story).


Or this:
The normalization is the equivalent of the clipping operation, forcing vertices into a unit space



I'm sure you have a point to make, but you have no ammunition as everything I've said is accurate.


Not at all. Pretty much everything you've said was inaccurate, and a lot of that was downright wrong. I've already listed your mistakes multiple times, you just keep ignoring them.
The point I'm making is:
1) Your knowledge of 3d is very limited.
2) The limits of your knowledge are so narrow that you don't even manage to see the bigger picture to understand what your limitations are.
3) Even when your limits are pointed out to you, you are too arrogant to admit your mistakes, and just ignore and deny everything.
4) You've been pushing and insulting me for years, on your pathetic ego-trip. Usually I just ignored you, or merely stuck to the technical side of a discussion, trying to correct your misinformation for those who may be reading these threads to get answers to their questions.
But I've grown tired of your insults and your arrogance.
I could name plenty of examples of your personal digs...
Take this for example, where you claim I've only copied code from examples:
http://www.asmcommunity.net/board/index.php?topic=29358.msg210020#msg210020

Or here, where you keep nagging about my vertex skinning code: http://www.asmcommunity.net/board/index.php?topic=29617.msg210437#msg210437
I stated clearly that I was porting old (working) code to my new codebase, so saying I 'only just got it working' is a bit weird...? Then you keep on nagging about the Microsoft DXMesh stuff, while I've clearly stated that I use my own code (DXMesh does not even exist in DX10/11 or OpenGL in the first place; besides, the OpenGL code is open source. More weirdness). Then you start nagging about the fact that I use 5 'texcoords' (actually just interpolated vertex data), again while the code is right in front of you.

You have a serious attitude problem. You and I are not on the same level, and I am no longer going to protect you against yourself.
Seems like you need to be taught a lesson. You keep pushing me for some kind of 'pissing match' to see who the hottest 3d programmer is. I think everyone but you already knows that you're not it. Usually I've just ignored your nudges, but well... Pointing out just how little you actually know (such as in this thread) may be just the lesson you need.
I just underestimated your pigheadedness. Even with clear evidence in plain sight, you remain in denial. Seems like a common trait in Australians.

Want to show everyone how hot you really are? You can start by answering the question I posed earlier, on how the triangle would actually be clipped to fit screen space.
Then you could write a complete software renderer for 286 with EGA to prove that you know all about rasterizing and clipping, and that you can do better than me.
Because anyone can use D3D and OpenGL and modify some example code and get something halfway working on screen. Writing your own software renderer (especially one that has to run on a 16-bit, integer-only CPU) takes actual understanding and skill.
Posted on 2012-09-23 10:25:57 by Scali
I'll see ur demo and raise u a game engine!
Your fixed ideas are based on a jaded understanding of the transform pipe.
If you want to clip in software, you *WILL* need to do the w divide yourself, and reject vertices outside of the unit space. Nothing I have said there is inaccurate.
I absolutely am not interested in a pissing match; this is what I do for a living, and it's been a few years since I treated it as a hobby. I think you should definitely buy GPU Pro 3 and check out the chapter by Juraj Vanek regarding the transform pipe. He's one of the guys I correspond with lately.
Have a nice day :)
Posted on 2012-09-27 00:49:09 by Homer

Your fixed ideas are based on a jaded understanding of the transform pipe.
If you want to clip in software, you *WILL* need to do the w divide yourself, and reject vertices outside of the unit space. Nothing I have said there is inaccurate.


You really are a broken record.
Still that arrogance, still thinking you know better. Well, you don't, and you're still wrong.
Anyone who has read this thread can tell you that what you're saying is nonsense.
They may not know exactly how the pipeline works, but common sense will tell them that if you take these three vertices:
(-10,20), (50,240), (380,100)
There is NO value of W that results in a polygon with 6 edges such as this:


Basic common sense will tell them that dividing a vertex can't add new edges, and neither can rejecting one.
I asked you specifically how it would be done. You couldn't answer that either.
So I hereby appeal to the common sense of people reading this thread: please chime in and say you don't buy Homer's nonsense.
Posted on 2012-09-27 02:55:39 by Scali
All I can say is that you are indeed a moron, and you do not understand homogeneous 4-vectors.
It's not my job to show you, so screw you, have a nice life doing demos, and try not to make people more stupid because of your shortcomings.
Posted on 2012-09-27 06:59:39 by Homer

All I can say is that you are indeed a moron, and you do not understand homogeneous 4-vectors.


I'm not sure what any of this has to do with my understanding of homogeneous 4d-vectors.
You don't necessarily NEED homogeneous 4d-vectors to do this sort of clipping. The screenshot here was taken from my 286-renderer, which, as I stated, does not use 4d vectors. Since it does not need a per-pixel z-value, you can re-use the z in the same way as the w in a 4d-homogeneous vector. So in this case I basically use 3d-homogeneous vectors.
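To illustrate what I mean by re-using z as the divisor, a minimal sketch (not the actual 286 code; the names are made up):

/* Minimal sketch of a perspective projection that divides by z instead of
   carrying a separate w (fine when no per-pixel depth is needed). */
typedef struct { float x, y, z; } vec3;
typedef struct { int x, y; } point2;

static point2 project_to_screen(vec3 v, float focal, int screen_w, int screen_h)
{
    point2 p;
    /* v is in view space; v.z > 0 is assumed here - geometry that crosses
       the near plane must be clipped before this divide. */
    p.x = (int)(screen_w / 2 + focal * v.x / v.z);
    p.y = (int)(screen_h / 2 - focal * v.y / v.z);
    return p;
}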

However, I am no stranger to 4d-homogeneous vectors either, obviously. My Java engine uses a conventional zbuffer approach, and as such the pipeline is modeled after the D3D8 pipeline. However, the clipping itself is still very similar to what the 286 code does.
And obviously I've used the D3D and OpenGL pipelines as well.
Also, here I derive the projection matrices in left-handed and right-handed notation for D3D and OGL: http://www.asmcommunity.net/board/index.php?topic=30124.msg212786#msg212786
To which you responded:

That's a great post, Scali :)
At some point, I know I'm going to search here for that info (my brain is more like a sieve each year), and I certainly haven't ever posted that before, I figure this will be the top result :)


So you see, in the ~20 years I've been doing graphics, I have written a number of 3d pipelines (early 3d accelerated stuff also required custom software pipelines for better performance/more visual trickery), including clippers and everything. If you want to see one of the pipelines, you can just look at the OpenGL renderer at http://sourceforge.net/projects/bhmfileformat/
It runs entirely on my own matrix code and shaders.
If you really want, I can also show you the pipelines and clippers I wrote in Java, for Amiga, GP2X, 486, 286 etc... But I don't think anyone doubts that they work, because people have been able to download and run the binaries.

So really, I'm not the one who has anything to prove here. You are, however. You cannot even answer a simple question of how a triangle is clipped. Let alone that you would be able to implement a proper clipper in an actual pipeline. So I'm not sure why you keep pretending that you know how it works, and just downright insult me for no reason.
Reality bites, doesn't it?

Also, dila responded to this thread earlier.
He is the guy who wrote PolySynth: http://www.movss.com/~rich/Code/PolySynth?action=print
Another software renderer. Where he had to implement the whole pipeline, and clipping. And he doesn't agree with you either:

What Scali is saying is correct.

Clipping is not just about rejection - removing entire polygons from the pipeline, but also about cutting the polygons so they fit within the view frustum. Sometimes clipping a single polygon will result in multiple polygons being generated.

The frustum clipping region is hardcoded in hardware - hence the projection matrix - to scale the scene to fit inside. In this way the programmer moves the geometry in the scene, instead of moving the clipping planes around the geometry.

The clipping is performed after the projection matrix is applied (for the reason described above), but before the perspective divide (division by W) that Scali talked about.

The reason the programmer does not perform the perspective divide himself is because the 4D vertices that are produced from the perspective matrix multiplication are passed on to the clipper. The clipper then operates on these vertices in 4D space, without the so-called "normalization" that Homer spoke of.
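To spell that ordering out: a minimal sketch (illustrative C, not taken from any of the renderers mentioned in this thread; only the left frustum plane is shown, the other five work the same way):

/* Sketch of the order dila describes: multiply by the projection matrix,
   clip the polygon against the frustum planes in homogeneous coordinates,
   and only then divide by w. Only x >= -w is shown; x <= w, y >= -w,
   y <= w and the two z planes are handled the same way. */
typedef struct { float x, y, z, w; } vec4;

static vec4 lerp4(vec4 a, vec4 b, float t)
{
    vec4 r = { a.x + t*(b.x - a.x), a.y + t*(b.y - a.y),
               a.z + t*(b.z - a.z), a.w + t*(b.w - a.w) };
    return r;
}

/* Clip polygon 'in' (n vertices, already in clip space) against x >= -w.
   Returns the new vertex count. */
static int clip_left_plane(const vec4 *in, int n, vec4 *out)
{
    int count = 0;
    for (int i = 0; i < n; i++) {
        vec4 a = in[i], b = in[(i + 1) % n];
        float da = a.x + a.w;                 /* signed distance to the plane */
        float db = b.x + b.w;
        if (da >= 0.0f)
            out[count++] = a;                 /* vertex is on the visible side */
        if ((da >= 0.0f) != (db >= 0.0f))
            out[count++] = lerp4(a, b, da / (da - db));  /* new vertex on the plane */
    }
    return count;
}

/* Only after clipping against all six planes do the homogeneous divide and
   the viewport mapping follow (e.g. for a 320x200 screen): */
static void to_screen(vec4 v, float *sx, float *sy)
{
    *sx = (v.x / v.w * 0.5f + 0.5f) * 320.0f;
    *sy = (1.0f - (v.y / v.w * 0.5f + 0.5f)) * 200.0f;
}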


So that is two people, who have actually proven that they are capable of implementing a full pipeline with clipping, against one person who has not...
Posted on 2012-09-27 08:46:21 by Scali