I was thinking about writing some code to render 3D objects onto a form (not really for a game, just for learning purposes), and I'm not quite sure how to handle this.

I was thinking of having a user point (the viewer) and an object (a cube).

What I do is the following:

I pretend there is a screen halfway between the user point and the object, then I draw lines (not literally draw, just for the explanation) from every point of the cube to the user point.

Then I know where these lines hit the "virtual screen", and that's how I convert 3D coordinates to 2D, but I'm not really satisfied with my calculation.

Can someone help me with ideas/texts about software 3D rendering? (BTW, I don't suck at math, so that won't be a problem.)

PS: I'll make a picture of my calculations to make things clearer; I'll post it here when it's done.

Scorpie

The vertex J(x;y;z) gets shown on the screen at coordinates (sx;sy).

FOV - Field Of View - works like the camera's zoom. You select what FOV to use. Let's say FOV=100 and J(7,15,10).

The equation for converting 3D to 2D is

sx = FOV*x/z

sy = FOV*y/z

Thus, you'll get a dot on the screen at coordinates (70;150):

sx = 100*7/10

sy = 100*15/10

You need 3 or more vertices to form a polygon; let's assume we'll be working only with triangles.

Calculate where the three vertices of the triangle land on the screen, and draw a 2D polygon there. That's the basics - it's simple :) . I won't explain texture mapping and lighting, but you can achieve some 3D look in the form just by using these simple equations and Win32's

```
BOOL Polygon(
    HDC hdc,                // handle to device context
    CONST POINT *lpPoints,  // pointer to polygon's vertices
    int nCount              // count of polygon's vertices
);
```

```
invoke Polygon, hdc, addr MyThreeScreenPoints, 3
```

Obtain the hdc with:

```
invoke GetDC, hwndMyForm
mov   hdc, eax
```

You'll have to select a different solid brush into the hdc for each color, to achieve a slightly more realistic-looking render, and select a pen of the same color so that you don't see the wireframe.

If you want to advance to lighting, it'll be easy: since it's flat shading (polygons with a solid color), you'll just need to change that color according to the distance/direction of the light and the polygon's rotation. This way you can get cool output.

If you want to advance even further - to texturing or smoothly blended fill colors (Gouraud shading) - you'll need to write your own polygon routine and have faster access to the bitmap bits, using a DIB. That's in case you don't want to touch DirectX or OpenGL. Luckily, there are dozens of tutorials to get you started with that :-D

Hehe, thanks - my calculation was like 5x longer. I'll work through the math to see how it works. Thanks again, I'll go work with it now.

Massive edit: sorry for the post that was here; it was a result of my own stupidity - I discovered the answer myself :)

Thanks for the code. I found a good tutorial that describes roughly the same technique and continues into filling, lighting and so on (they use DirectDraw, but I'm just there for the theory).

BTW, what is a realistic value for FOV?

It's common to set the FOV to 90 degrees, i.e. pi/2 radians, but do play around with the FOV ratio and find out what looks nice to you :)

To be perfectly frank, there is an ideal value for FOV known as the Golden Mean - it's worth reading up on; it was the basis of much ancient art and architecture...

Thanks, I'll look it up.

If I take a low value for FOV, I get weird results when I slide the object to the side (see attachment; it is with FOV 100).

If I take it somewhere near 500, it looks more natural - is this normal?

The locations of my cube's vertices are like this:

```
-50, -50,  50
-50,  50,  50
 50,  50,  50
 50, -50,  50
-50, -50, 150
-50,  50, 150
 50,  50, 150
 50, -50, 150
```

I move the cube by adding a translation value to X, for example, and then I recalculate the screen positions.

Edit: forgot the attachment.

Afternoon, Scorpie.

Maybe use DX instead?

Cheers,

Scronty

Nope, I want to do it by hand and learn as much from it as possible :)

In order to understand why 500 looks nice and 100 does not, let's examine what this FOV thing is: it's an ANGLE, which can be expressed in degrees (0 to 360) or, more commonly, in radians (0 to 2pi).

If you are expressing it as an angle, then 90 degrees is a quarter of a circle and 100 degrees is slightly wider, but 500 degrees wraps around modulo 360, so it's really 500-360 = 140 degrees...

You say 100 degrees looks bad, but 140 looks OK?

140 is beginning to approach 180. Think about it: if the FOV is 180, you are squeezing onto the screen not just what's in front of you but also what's to the left and right of you - you have a "bug's eye" view of the world. The human FOV is close to 90 (somewhat less).

Sometimes things can look weird under perspective until you have several objects at once, so you can see the difference over distance and give your eye a few "markers" in the scene to let your brain make the right corrections to what the eye is seeing... I found early on that my perspective seemed wrong, but it was only because I was shoving all the objects right up near my viewpoint.

OK, I'll set it to 90, and I'll change my code from this cube to something that can build polygons and fill them in.

Edit: hmm, I made the FOV 90 and then made it 50. Since 50 is now (according to your explanation) the angle in which you see, the object SHOULD get bigger on screen when narrowing the angle of sight - but on my form the cube actually becomes smaller.

I can see why it gets smaller: you multiply each X and Y by FOV, and if you make one or both operands of a multiplication smaller, the result gets smaller.

Why is this, and how do I fix it? ('This' refers not to the part about the multiplication but to the first part of this edit.)

Firstly, I'd like to say that Ultrano's formula for applying perspective is incorrect - it's simplistic. In the very earliest days of 3D, before we had the term FOV in our coding vocabulary, we'd use a formula a lot like his to reduce 3D points to 2D, where we'd simply divide X and Y by Z.

Rather than attempt to explain or address the various problems with this oversimplified solution, I'm going to do two things: firstly, point you to http://easyweb.easynet.co.uk/~mrmeanie/persp/persp.htm, and secondly, tell you that there's an even better way - creating a "Projection Transform Matrix" which describes your FOV and then applying it to each and every vertex. This solution is much better for large numbers of vertices, teaches more about matrix transforms, avoids math issues in the formulae, and is the modern standard solution to this problem.

Have a nice day :)

Interesting :) I didn't know exactly how to calculate the FOV - I was just testing different values to see what looked best. I might move to matrices if they prove fast enough on ARM :) So far, matrices are **really** slow there, but if they beat the 30-to-80-cycle divide, and I use the DSP extensions of the newest ARM CPUs, I'll use that matrix (now I've gotta find it ^^").

Thanks for the link. I've read some tutorials there, but I still have trouble determining some values. It's about the link you gave me; they have the following:

f = w / tan(a)

g = h / tan(b)

What would be good values for a and b?

a and b are the horizontal and vertical FOV - use your FOV angle here.

There are two of them to allow you more control over the stretching of the image (to counteract extreme screen resolutions); you can use the same FOV for a and b, or different ones - up to you. The main thing to realize is that a and b are angles in degrees.

You don't NEED this part of the formula; as I said, it's only there to give more control over the perspective. The formula just above this one is more than fine...

@Scronty: it's easier to understand things first and move to DX afterwards, rather than trying to understand 3D while being distracted by the DX API.

@Ultrano

I use rcp + mul instructions instead of divs when I test-render.

What about deciding which values you'll use for Z-clipping and making a reciprocal LUT only for the valid Zs, with 16-bit accuracy, interpolating the rest?

I mean, emulate the SSE RCP instruction in your ARM assembly.

Yup :) I'm actually using such a LUT. It's funny that it's only as fast as a software DIV - that latency is because of the damned small cache and slow RAM. I'll have to run lots of speed tests :| And try to use the DSP that's present as a coprocessor on the newest ARMs - 1 cycle instead of 15 for a 16x16-bit multiplication ^_^, plus saturation. But currently I have to concentrate on 2D games ^^"

Next, think about the order of drawing. In software rendering without a Z-buffer, the "painter's algorithm" is used: first you draw the furthest polygons, and the nearest last. You can take it one step further - use normals (something like the direction a polygon is pointing) to eliminate faces that don't look at you (they won't be needed if the 3D model is solid). For example, imagine you're in front of a box: you see only the facing and top rectangles of the 6 rectangles that form the box. The other 4 faces (a.k.a. polygons) look away from you, and even if you drew them, they'd always be overwritten by other polygons of the same model. So no sweating over drawing them is necessary :D . Only if the model were semi-transparent would you need to draw those back faces.

To see which faces are further away (and first to draw), you have to sort the polygons by their Z-distance. This can be done in two ways (I just came up with the second, hehe - haven't tested it):

1) From the 3 vertices of a polygon, sum the Z values and divide by 3 - this gives you the average Z, and you sort by that value.

2) From the 3 vertices, take the one with the highest Z value and use that in sorting.

Use quicksort :) - in Win32, the C runtime's qsort is available.

BTW, realvampire, your app runs really slowly on my AthlonXP 2000+ with 512 MB DDR (400 MHz) and a Radeon 9200 ^^" .

Yes, you're right. That was because I use a timer to refresh the screen. Do you have another way to make it fast?

I'd use DirectDraw - very easy to set up - with a simple loop that has invoke Sleep,20 at the end, together with GetAsyncKeyState to check for Escape and other keys :) . And don't set the thread priority.

Do you have a link for setting it up? My last attempt brought me great confusion, and I almost gave up.