OK, so you have described a left-handed coordinate system.
This eliminates half of the problems you might have.
What is the Position? I would expect a direction, given that you are normalizing it.
If it were a target position, the vector would be (target - source), normalized.
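(For reference, a rough C sketch of that construction; the Vec3 type and the names here are illustrative, not taken from the code below:)

#include <math.h>

typedef struct { float x, y, z; } Vec3;

/* Direction from source to target: (target - source), normalized. */
Vec3 direction(Vec3 source, Vec3 target)
{
    Vec3 d = { target.x - source.x, target.y - source.y, target.z - source.z };
    float len = sqrtf(d.x*d.x + d.y*d.y + d.z*d.z);
    if (len > 0.0f) { d.x /= len; d.y /= len; d.z /= len; }
    return d;
}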


Here is the latest code, almost exactly what I wanted, except I don't need the Z rotation:

FPGetAngles proc uses esi edi lpPos:dword,lpRotation:dword
LOCAL signX,signY:dword
LOCAL rslt:VERTEX
LOCAL q:dword
;++ = 0-90
;+- = 91-180
;-- = 181-270
;-+ = 271-360

mov esi,lpPos
mov edi,lpRotation
; X = Y,Z
; Y = X,Z
; Z = X,Y
invoke Vec_Normalize,addr rslt,lpPos

fld rslt.y
fld rslt.z
fpatan                  ; ST(0) = atan2(y,z)
fstp q
invoke Vec_RadToDeg,q
fstp [edi].VERTEX.x     ; X rotation, in degrees

fld rslt.x
fld rslt.z
fpatan                  ; ST(0) = atan2(x,z)
fstp q
invoke Vec_RadToDeg,q
fstp [edi].VERTEX.y     ; Y rotation, in degrees

; fld rslt.x
; fld rslt.y
; fpatan
; fstp q
; invoke Vec_RadToDeg,q
; fstp [edi].VERTEX.z


ret
FPGetAngles endp
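(A rough C equivalent of what this routine computes, continuing the sketch above; fpatan takes the arctangent of ST(1)/ST(0), so loading y and then z yields atan2(y, z):)

/* Angles, in degrees, of a normalized direction. */
void get_angles(Vec3 dir, float *rotX, float *rotY)
{
    const float RAD2DEG = 180.0f / 3.14159265f;
    *rotX = atan2f(dir.y, dir.z) * RAD2DEG;  /* X rotation from Y,Z */
    *rotY = atan2f(dir.x, dir.z) * RAD2DEG;  /* Y rotation from X,Z */
}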


Here is the direction I want, from A(0,10,0) to B(0,0,0)

fld FP4(10.)
fstp lamp.Position.x
fld FP4(0.)
fstp lamp.Position.y
fld FP4(-10.)
fstp lamp.Position.z
invoke FPGetAngles,addr lamp,addr lamp.Rotation
Posted on 2012-02-24 06:28:38 by Farabi
Nevermind, I guess I found it


FPGetAngles proc uses esi edi lpPos:dword,lpRotation:dword
LOCAL signX,signY:dword
LOCAL rslt:VERTEX
LOCAL q:dword
LOCAL t:qword
;++ = 0-90
;+- = 91-180
;-- = 181-270
;-+ = 271-360

mov esi,lpPos
mov edi,lpRotation
; X = Y,Z
; Y = X,Z
; Z = X,Y
invoke Vec_Normalize,addr rslt,lpPos
; invoke Vec_Copy,addr rslt,lpPos

fld rslt.y
invoke FpuArcsin,0,0,129 ; source and result on the FPU stack (assumed from the flags)
fstp [edi].VERTEX.x

fld rslt.x
fld rslt.z
fpatan                  ; ST(0) = atan2(x,z)
fstp q
invoke Vec_RadToDeg,q
fstp [edi].VERTEX.y     ; Y rotation, in degrees

ret
FPGetAngles endp


It solved everything I wrote down on paper. I guess this is the final code.
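(In C terms the revised math is roughly the following; whether FpuArcsin returns radians or degrees depends on its flags, so the conversion is written out explicitly in this sketch:)

/* Pitch from the arcsine of the normalized Y component,
   yaw from the arctangent of the X and Z components.    */
void get_angles2(Vec3 dir, float *rotX_deg, float *rotY_deg)
{
    const float RAD2DEG = 180.0f / 3.14159265f;
    *rotX_deg = asinf(dir.y) * RAD2DEG;
    *rotY_deg = atan2f(dir.x, dir.z) * RAD2DEG;
}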
Posted on 2012-02-24 07:23:20 by Farabi
As an aside: Why are you using degrees for angles?
The FPU (like most API functions) can only handle radians.
So you might as well just use radians throughout your code.
It saves the overhead of having to convert things to and from degrees, and also avoids any confusion of what format to use where, since you will be using radians consistently.

Degrees are for high-school students and VB programmers.
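(If conversions are needed at the API boundary anyway - glRotatef takes degrees - a pair of helpers keeps them in one place. A trivial sketch:)

#define DEG2RAD(a) ((a) * (3.14159265f / 180.0f))
#define RAD2DEG(a) ((a) * (180.0f / 3.14159265f))

/* Keep angles in radians internally; convert only at the API edge, e.g.:
   glRotatef(RAD2DEG(yaw), 0.0f, 1.0f, 0.0f);                             */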
Posted on 2012-02-24 16:27:54 by Scali

As an aside: Why are you using degrees for angles?
The FPU (like most API functions) can only handle radians.
So you might as well just use radians throughout your code.
It saves the overhead of having to convert things to and from degrees, and also avoids any confusion of what format to use where, since you will be using radians consistently.

Degrees are for high-school students and VB programmers.



Because I'm using glRotatef for the camera. If I knew of something better, I might use something else.
Posted on 2012-02-24 16:52:32 by Farabi

Nevermind, I guess I found it [...] It solved everything I wrote down on paper. I guess this is the final code.


Shooot, it looked like it was right, but it was wrong.
Posted on 2012-02-24 18:02:18 by Farabi
Do yourself a favor, investigate quaternion based cameras.
Matrix based cameras have some problems that will crop up - they have lots of singularities and numerical drift issues.
Quaternion based cameras are a lot more stable, and all the math is faster.
Besides, once you start using 'modern' OpenGL contexts you'll find that glRotatef is deprecated, and so is glPushMatrix, and you have to start creating all your own matrices by hand and passing them in to the shader.
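(A minimal C sketch of the idea - illustrative code, not from any project mentioned in this thread. A unit quaternion accumulates rotations by multiplication, and an occasional renormalization counters numerical drift:)

#include <math.h>

typedef struct { float w, x, y, z; } Quat;

/* Hamilton product: composes rotation b followed by rotation a. */
Quat quat_mul(Quat a, Quat b)
{
    Quat q;
    q.w = a.w*b.w - a.x*b.x - a.y*b.y - a.z*b.z;
    q.x = a.w*b.x + a.x*b.w + a.y*b.z - a.z*b.y;
    q.y = a.w*b.y - a.x*b.z + a.y*b.w + a.z*b.x;
    q.z = a.w*b.z + a.x*b.y - a.y*b.x + a.z*b.w;
    return q;
}

/* Rotation of 'angle' radians about a unit axis (x,y,z). */
Quat quat_from_axis_angle(float x, float y, float z, float angle)
{
    float s = sinf(angle * 0.5f);
    Quat q = { cosf(angle * 0.5f), x*s, y*s, z*s };
    return q;
}

/* Renormalize now and then to keep the quaternion unit length. */
Quat quat_normalize(Quat q)
{
    float n = sqrtf(q.w*q.w + q.x*q.x + q.y*q.y + q.z*q.z);
    q.w /= n; q.x /= n; q.y /= n; q.z /= n;
    return q;
}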
Posted on 2012-02-24 18:37:42 by Homer

Do yourself a favor, investigate quaternion based cameras.
Matrix based cameras have some problems that will crop up - they have lots of singularities and numerical drift issues.
Quaternion based cameras are a lot more stable, and all the math is faster.
Besides, once you start using 'modern' OpenGL contexts you'll find that glRotatef is deprecated, and so is glPushMatrix, and you have to start creating all your own matrices by hand and passing them in to the shader.


Can you point me to a GPU programming tutorial for the next-gen OpenGL you mentioned?
Also, is a SiS Mirage3 able to do vertex shader programming? It sounds like Khronos wants us to make our own renderer.
Thanks for the nice answers.
Posted on 2012-02-24 18:43:38 by Farabi


Shooot, it looked like it was right, but it was wrong.


I just realized it: this worked. What made it wrong was that ODE is right-handed and OpenGL is left-handed, so I need to adjust it a little.
Posted on 2012-02-25 00:33:59 by Farabi

Because I'm using glRotatef for the camera. If I knew of something better, I might use something else.


Roll your own. You're going to have to anyway, if you use a more modern version of OpenGL, with shaders and all.
Or you can use mine: http://sourceforge.net/projects/glux/
Or some other project with math functions and such.
Posted on 2012-02-25 03:44:03 by Scali
Here is a poor start:
http://www.lighthouse3d.com/cg-topics/code-samples/opengl-3-3-glsl-1-5-sample/

As it states, it's for an OpenGL 3.3 context (pretty new), with shader language 1.5 (old already).

It shows how to hand matrices to the shader, and a little simple shader code.

I know, it's C stuff, but just transcribe it in your head into your favorite language ;)

Also, notice that the difference between left and right handed systems is actually the inverse of the matrix, so you can just reverse the order of matrix multiplies and it just works.

Once you start playing with shaders, you will never use that old stuff again. Mostly because you can't, but it's worth it.

So - we end up with three matrices (world, view, proj) which we need to multiply together (this can happen inside the shader code); the camera view and world transforms can all be done with quaternions until the very last moment, when we turn them into matrices for the shader.
This is what I was hinting at with the quaternion-based camera.
The projection matrix of course has to stay a matrix the whole time, but the other two we can spit out as a PRODUCT of our camera code, not as input to it.
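(A sketch of that last step, turning a unit quaternion into a 4x4 column-major OpenGL matrix for the shader; illustrative code, reusing the Quat type from the sketch above:)

/* Unit quaternion -> 4x4 column-major rotation matrix. */
void quat_to_mat4(Quat q, float m[16])
{
    float xx = q.x*q.x, yy = q.y*q.y, zz = q.z*q.z;
    float xy = q.x*q.y, xz = q.x*q.z, yz = q.y*q.z;
    float wx = q.w*q.x, wy = q.w*q.y, wz = q.w*q.z;

    m[0] = 1 - 2*(yy + zz);  m[4] = 2*(xy - wz);      m[8]  = 2*(xz + wy);      m[12] = 0;
    m[1] = 2*(xy + wz);      m[5] = 1 - 2*(xx + zz);  m[9]  = 2*(yz - wx);      m[13] = 0;
    m[2] = 2*(xz - wy);      m[6] = 2*(yz + wx);      m[10] = 1 - 2*(xx + yy);  m[14] = 0;
    m[3] = 0;                m[7] = 0;                m[11] = 0;                m[15] = 1;
}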
Posted on 2012-02-25 09:02:43 by Homer

Also, notice that the difference between left and right handed systems is actually the inverse of the matrix, so you can just reverse the order of matrix multiplies and it just works.


No, we've had this discussion before...
You are again confusing handedness with column vectors vs row vectors.
The 'handedness' of a space is determined by how the positive and negative sides of the axes relate to each other. E.g., if a left-handed space has the positive z-axis going 'into the screen' (along the viewing direction), then flipping the z-axis around makes the space right-handed.
Inverting axes is not equivalent to inverting matrices.
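(To make the distinction concrete: converting a point between two conventions whose z-axes point opposite ways is an axis negation - applying S = diag(1, 1, -1), which is its own inverse - not a matrix inversion or a change of multiplication order. A sketch, reusing the Vec3 type from earlier:)

Vec3 flip_handedness(Vec3 v)
{
    v.z = -v.z;  /* apply S = diag(1, 1, -1) */
    return v;
}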


(can happen inside the shader code)


In theory yes, but it's not how you should be doing it.
After all, shaders are stateless. The same code is executed for every vertex/pixel, results cannot be buffered/reused.
Which would mean that you are repeating the same multiplies for every vertex or pixel in your scene, rather than just pre-calcing it once on the CPU and passing it as a constant.
Especially on lower-end hardware, where you have very tight instruction limits, you want to avoid any unnecessary code inside the shaders at all cost.
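(A sketch of the CPU-side approach: concatenate once per object and upload a single constant. mat4_mul and the matrix names here are illustrative; glUniformMatrix4fv is the real GL call:)

/* c = a * b for 4x4 column-major matrices (column-vector convention). */
void mat4_mul(float c[16], const float a[16], const float b[16])
{
    for (int col = 0; col < 4; col++)
        for (int row = 0; row < 4; row++) {
            float s = 0.0f;
            for (int k = 0; k < 4; k++)
                s += a[k*4 + row] * b[col*4 + k];
            c[col*4 + row] = s;
        }
}

/* Once per object, on the CPU:
       mat4_mul(vw, view, world);
       mat4_mul(mvp, proj, vw);
       glUniformMatrix4fv(mvpLocation, 1, GL_FALSE, mvp);
   The vertex shader then performs a single mvp * position multiply. */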
Posted on 2012-02-25 13:21:34 by Scali
You're right about handedness, I wasn't thinking when I posted that.
I was referring to the column major versus row major formalisms and their relationship to the order of multiplications (pre versus post).

As for multiplying matrices in the shader: although there are certainly situations where we want something other than the MVP (aka WVP) matrix, I agree that GENERALLY it's a bad idea to multiply them on the GPU - particularly if we're talking about the pixel shader. However, there are definitely some situations where handing in the component matrices is warranted; ones that spring to mind are GPU-based culling (geometry-shader early rejection) and GPU-based instancing and skinning (sheets of dual quaternions instead of world transforms, unpacked on the GPU).
For the typical pixel shader, though, you are absolutely right: it's a terrible idea. I was mainly pointing out that the matrices we hand in to a shader are OUR matrices, the same ones we create and manage on the CPU, which is very different behavior from the old-school OpenGL mystical black-box approach (internal matrix stack, premultiplication as standard, etc.).
Posted on 2012-02-25 20:30:59 by Homer
Unfortunately my SiS Mirage 3 card is unable to make OpenGL 3 work. I guess I need to wait for the "pinberrypi" device or something; it would be a great standard.
Posted on 2012-02-25 21:13:12 by Farabi

As for multiplying matrices in the shader: although there are certainly situations where we want something other than the MVP (aka WVP) matrix, I agree that GENERALLY it's a bad idea to multiply them on the GPU - particularly if we're talking about the pixel shader. However, there are definitely some situations where handing in the component matrices is warranted; ones that spring to mind are GPU-based culling (geometry-shader early rejection) and GPU-based instancing and skinning (sheets of dual quaternions instead of world transforms, unpacked on the GPU).


In many cases you want BOTH. You'd want some of the individual matrices, or just a few matrices concatenated together.
For example, with skinning, this is generally performed either in object space or in world space.
So after the skinning is performed, you'd still want to multiply by a (world*)view*projection matrix.

But again, this is advanced stuff... In general you'll start by just doing everything on the CPU and using the matrices merely as constants in the shaders, performing only matrix*vector operations (or vector*matrix, depending on whatever tickles your fancy... Since you are programming the whole pipeline with shaders, you can go either way).
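(As a sketch of the last point, the column-vector form of the transform; the row-vector form just transposes the matrix and reverses the multiplication order:)

/* out = m * v for a column-major 4x4 matrix and a column vector. */
void mat4_transform(float out[4], const float m[16], const float v[4])
{
    for (int row = 0; row < 4; row++)
        out[row] = m[0*4+row]*v[0] + m[1*4+row]*v[1]
                 + m[2*4+row]*v[2] + m[3*4+row]*v[3];
}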
Posted on 2012-02-26 03:57:09 by Scali