First, I want to say a big thanks for the work you put into writing these tutorials - I really appreciate it.
It's a big shame, though, that so few people visit this forum.


Now to my question: in demo #7, something goes wrong for me at line 48:
.if $inv(glGenTextures,1, addr texid)==0

glGenTextures returns 1 for me, which triggers the msgbox saying there was an error.

This is my first time ever dealing with OpenGL, or any graphics library for that matter, but I tried to see if I could solve this on my own.
Checking the documentation on opengl.org (http://www.opengl.org/sdk/docs/man/xhtml/glGenTextures.xml), it says the return value of glGenTextures is void.
So I figured, since you set texid to 0 before the call, that maybe everything is as it should be if we instead check that texid has changed from zero after the call:
invoke glGenTextures,1, addr texid
.if texid!=0

like that - but then it just draws the shapes without any texture at all.

So what am I doing wrong? ^^

Also, when checking out the OpenGL APIs, is http://www.opengl.org/sdk/docs/man/ the recommended reference to use?
Posted on 2010-03-19 08:16:30 by Azura
Hi, Azura :)
This can happen if you tried to load a texture but have not yet set a Render Context... make sure the window is created and the render context is set before trying to load a texture.
In fact, you will find that there are two possible error codes that can be returned by glGenTextures - I am not sure whether this behavior (when we don't have a RC yet) or this error code is defined anywhere, but I did fall for this trap originally, and a little detective work showed me that I was not alone.
Just know that we do expect a return value of zero, and in that case, a nonzero texture id will have been returned in the array we specified (in our case, an array of just one element).

Complete documentation of the specification can be found at http://www.opengl.org/documentation/

Attached is demo 8.
This time, I have modified the texture loader to use OA32's Pixelmap class.
Although this is one seriously powerful and cool class, I'm only using it here to help load various image file formats.
Pixelmap uses IPicture to load image files into a device-independent bitmap (internally) - we have access to the bitmap information, including the raw pixel data. It supports BMP, JPEG, TGA and others.
This example comes with a JPEG (1024x1024) which is being used to texture our tetrahedron.
You'll notice that textures are now loaded and released by a pair of new bookend functions, and that we now unload and reload textures during screen mode switches.
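The pair might look something like this sketch - the names and the loader call are illustrative, not necessarily the attached source's:

UnloadTextures proc
    .if texid != 0
        invoke glDeleteTextures, 1, addr texid  ; release the GL texture object
        mov texid, 0                            ; mark it as unloaded
    .endif
    ret
UnloadTextures endp

ReloadTextures proc
    invoke UnloadTextures                       ; safe even if nothing is loaded
    invoke LoadTexture, addr szTextureFile      ; hypothetical loader - fills texid
    ret
ReloadTextures endp

The point is to tie texture lifetime to the Render Context: destroy textures before the RC goes away on a mode switch, and recreate them once the new RC is current.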

In the next example, we'll learn about mipmaps and texture filtering, which can improve the way a texture appears when it is scaled up or down beyond its usual size (say, if we are very close to or very far from something)... i.e., what OpenGL will do if it needs to 'invent' more texture data than exists in the image, or needs to 'drop' some texture data.



Attachments:
Posted on 2010-03-19 09:10:37 by Homer
In the OpenGL_Demo8.rar you included a binary and no source.



I tried to look deeper into my problem but still can't find out what's wrong.
When running your OpenGL_Demo7 code unmodified, or when running the binary you included as OpenGL_Demo8, that's when I get the error "Failed glGenTextures".

If I do the following, checking each of the functions to confirm they do their job:

RegisterClass
CreateWindowEx
GetDC

ChoosePixelFormat
SetPixelFormat

wglCreateContext
wglMakeCurrent

glGenTextures

then glGenTextures still fails like before

edit: calling glGetError after glGenTextures returns 0

edit 2: I think the OpenGL version being used is 1.4,
if that is of any help.
Posted on 2010-03-19 16:12:45 by Azura
Whoops!
Attached is the source for demo8; I did not bother reattaching the jpeg.

Your glGenTextures problem sounds like it could be driver-specific... even though the return value is declared as void in older documentation, that same documentation clearly mentions two possible error codes - so it can't be void, can it? :P

Instead of checking the return value of the api, check whether the texid changed from zero.
If it did, consider it successful.
Try that, and let me know what happens.
If that fails we'll investigate this further, as that api function should never fail, and is absolutely required for texturing.

You'll see in demo8 that WinMain now calls 'ReloadTextures' just after checking the return value from calling 'CreateGLWindow'... but it's essentially the same idea, so it won't immediately solve your problem.
Did the demo8 binary actually run ok for you?

PS: I added a line to find out the OpenGL version at runtime and throw the string to debug output.
Like glGenTextures, it requires a valid Render Context.
Apparently I'm using OpenGL v3.2.0
This probably explains what's going on with the differences in return values / documentation of the api.
Chances are good that your code is actually working fine - but I would still suggest upgrading your OpenGL driver at your videocard vendor's site - and I am seriously considering throwing an error for early versions of OpenGL drivers and telling the user to go update their stuff... 1.4 is quite out of date now; version 4 is in beta.
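The version check itself is just one call - a minimal sketch (how you route the string to debug output will vary, so here we only fetch it):

; query the runtime OpenGL version - requires a current Render Context
invoke glGetString, GL_VERSION
; eax now points to a zero-terminated version string, e.g. "3.2.0"

glGetString can also be asked for GL_VENDOR, GL_RENDERER and GL_EXTENSIONS, which are handy for the same kind of diagnostics.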


Attachments:
Posted on 2010-03-19 19:40:51 by Homer
Demo 9 : Texture Filtering

If you look closely at the Texture Loader, you will notice these two lines:

invoke glTexParameteri,GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR ; Linear Min Filter
invoke glTexParameteri,GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR ; Linear Mag Filter


Here, we are selecting the 'minify and magnify filters'.
These tell OpenGL what to do if there is too much texture data, or not enough, to cover a surface on the screen.
This is especially obvious when objects are very close, or very far away.
Of the six possible filter options, the last four all require something called 'mipmaps'.
Let's describe quickly what mipmaps are, and then look at the possible filtering options.

Mipmaps are copies of a texture with different levels of quality.
OpenGL can choose which 'level of density' (mipmap texture) to use based on the pixel/texel ratio, which can drastically improve the visual quality of the rendered scene (by eliminating artefacts such as moire patterns, and providing antialiasing of edges).
We can tell OpenGL to generate mipmaps for a given texture by adding one more api call to our texture loader, as can be seen in the attached sourcecode.
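One way to request mipmap generation - a sketch assuming the GL_GENERATE_MIPMAP texture parameter that became core in OpenGL 1.4 (the attached source may use a different call, such as gluBuild2DMipmaps; dwWidth, dwHeight and pPixels are placeholder names):

; ask the driver to auto-generate the whole mipmap chain for the bound texture
invoke glTexParameteri, GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE
; then upload the base level as usual
invoke glTexImage2D, GL_TEXTURE_2D, 0, GL_RGB, dwWidth, dwHeight, 0, GL_RGB, GL_UNSIGNED_BYTE, pPixels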


MinFilter can use any of the six possible options, whereas MagFilter can only use the first two of these.
Note that the remaining four options all require 'mipmaps'.


GL_NEAREST
   Returns the value of the texture element that is nearest (in Manhattan distance) to the center of the pixel being textured.
GL_LINEAR
   Returns the weighted average of the four texture elements that are closest to the center of the pixel being textured. These can include border texture elements, depending on the values of GL_TEXTURE_WRAP_S and GL_TEXTURE_WRAP_T, and on the exact mapping.
GL_NEAREST_MIPMAP_NEAREST
   Chooses the mipmap that most closely matches the size of the pixel being textured and uses the GL_NEAREST criterion (the texture element nearest to the center of the pixel) to produce a texture value.
GL_LINEAR_MIPMAP_NEAREST
   Chooses the mipmap that most closely matches the size of the pixel being textured and uses the GL_LINEAR criterion (a weighted average of the four texture elements that are closest to the center of the pixel) to produce a texture value.
GL_NEAREST_MIPMAP_LINEAR
   Chooses the two mipmaps that most closely match the size of the pixel being textured and uses the GL_NEAREST criterion (the texture element nearest to the center of the pixel) to produce a texture value from each mipmap. The final texture value is a weighted average of those two values.
GL_LINEAR_MIPMAP_LINEAR
   Chooses the two mipmaps that most closely match the size of the pixel being textured and uses the GL_LINEAR criterion (a weighted average of the four texture elements that are closest to the center of the pixel) to produce a texture value from each mipmap. The final texture value is a weighted average of those two values.



The attached demo allows you to switch between three sets of min/mag filters by pressing and releasing the F key.
I suggest you run this in a large window, or fullscreen, or you may not notice the difference, as there is still no way to move closer to our tetrahedron.

Now some notes about the implementation.
Since changing the filter is a texture function, it only affects the texture to which we are currently bound.
In a demo with multiple textures, our rendering function would be binding to each texture before using it.
If we're going to change the filter in that scenario, it has to happen after each glBindTexture call, so we should not be doing this from some other thread such as WndProc's, and we should only be doing it if the filter actually changed, rather than calling it naively every frame (though, to be honest, I don't think there's a big penalty if we do).
In this demo, I only call the BindTexture once, so I can actually safely set the filters any time I like.
However, I have given example code for how to use a global variable as a flag to communicate a filter change, which should be monitored from the render function - see the sketch below. Of course, we don't normally go changing filters arbitrarily.
Most sourcecode for this topic actually generates three textures, sets up the filters for each, and then just switches textures at runtime - I wanted to show that we don't need three copies of the same texture to achieve three filter effects.
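A minimal sketch of that flag pattern - the variable names here are hypothetical, not necessarily the attached demo's:

.data
bFilterChanged dd 0            ; set by WndProc when F is released
dwFilter       dd GL_LINEAR    ; the newly requested filter mode

.code
; in the render function, once the right texture is bound:
.if bFilterChanged != 0
    invoke glTexParameteri, GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, dwFilter
    invoke glTexParameteri, GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, dwFilter
    mov bFilterChanged, 0      ; consume the flag
.endif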

Next, we will begin looking at Lighting, Normals and Materials :)

Posted on 2010-03-19 21:38:03 by Homer
Demo10 - Basic Lighting and Surface Normals

Once we introduce Lighting, our 3D scene really comes alive. It makes a massive difference.

Lights have several attributes (see the sketch after this list):
Ambient - this is the color which a light contributes to the entire scene. You typically only want one ambient light in your scene, as it dictates how things look in the ABSENCE of direct light, if that makes sense.
Diffuse - this is the color which a light casts upon objects in the scene, ie the color of REFLECTED light; more on this later.
Specular - this is the color of the light reflected in the specular highlight we see upon shiny surfaces.
Cone Radius - used to make 'spotlights' with a cone of a given angle (e.g. a radius of 15 degrees gives a 30 degree cone).
Cone Direction - for spotlights, the direction the cone is facing (ie which way the spotlight is pointing).
Linear Attenuation - how light diminishes over distance.
Angular Attenuation - for spotlights only, how light diminishes from the center of the cone to the outer radius.
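For reference, here's roughly how those attributes map onto the fixed-function API - a sketch only (the float variables are placeholders, and we won't touch the spotlight parameters until a later demo):

invoke glLightfv, GL_LIGHT1, GL_AMBIENT,  addr AmbientLight
invoke glLightfv, GL_LIGHT1, GL_DIFFUSE,  addr DiffuseLight
invoke glLightfv, GL_LIGHT1, GL_SPECULAR, addr SpecularLight
invoke glLightf,  GL_LIGHT1, GL_SPOT_CUTOFF, fConeRadius        ; cone half-angle, in degrees
invoke glLightfv, GL_LIGHT1, GL_SPOT_DIRECTION, addr ConeDir    ; xyz direction vector
invoke glLightf,  GL_LIGHT1, GL_LINEAR_ATTENUATION, fLinearAtt  ; falloff over distance
invoke glLightf,  GL_LIGHT1, GL_SPOT_EXPONENT, fSpotExp         ; falloff across the cone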

We won't use all of these at first, we'll introduce them in stages.

The other thing which affects the way objects look under lighting is called Material.
Materials describe the way light is reflected off an object, which in the real world would depend on what it is 'made of'.
We can for example describe how 'shiny' something is.
We'll look at point lights, then spotlights, then we'll play with Materials.
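As a small taste of what's coming, material properties are set with the glMaterial* family - a sketch, with MatSpecular and fShininess as placeholder names:

invoke glMaterialfv, GL_FRONT, GL_SPECULAR, addr MatSpecular   ; color of the specular reflection
invoke glMaterialf,  GL_FRONT, GL_SHININESS, fShininess        ; 0.0 to 128.0 - higher means a tighter highlight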

My next post will contain a very simple Lighting demo, with 'Surface Normals' and a single Point light.
Although Surface Normals look OK for objects with big flat surfaces, they are not very good for 'rounded/smooth' surfaces, so I'll follow that up with an example that uses 'Vertex Normals' (lighting is smoothed across surfaces).
We'll learn how to calculate Surface Normals using crossproducts, and subsequently we'll learn how to convert SurfaceNormals into Vertex Normals.
In fact, for any convex solid like our tetrahedron and cube which is centered on its Origin, we can actually cheat and calculate all this stuff far more quickly, but let's learn how to do it for ANY shape ;)

When we finally look at vertex and 'fragment' (pixel) shaders, we'll have the opportunity to take full control of lighting, and much of these early examples will seem redundant.
But one step at a time, yes?

Posted on 2010-03-19 23:15:04 by Homer

Mipmaps are copies of a texture with different levels of quality.


I'd like to add that mipmaps are scaled-down versions of the original texture. Usually they form a sequence where the resolution is cut in half at each step.
Eg, if you have a 512x512 texture, you will get the following set of mipmaps:
512x512 (the original)
256x256
128x128
64x64
32x32
16x16
8x8
4x4
2x2
1x1

The key to these scaled-down textures is that a filter is applied during the downscaling process. So they are pre-filtered textures. This allows for efficient filtering in realtime.
Posted on 2010-03-20 04:18:34 by Scali

So.. Steps to creating our first light:


    .data
Color4 struct
r real4 ?
g real4 ?
b real4 ?
a real4 ?
Color4 ends

AmbientLight Color4 <0.5f,0.5f,0.5f,1.0f>   ;Ambient white light, of mid strength
DiffuseLight Color4 <1.0f,1.0f,1.0f,1.0f>   ;Full intensity white light
SpecularLight Color4 <1.0f,1.0f,1.0f,0.0f>  ;Specular highlight
PositionLight Color4 <0.0f,0.0f,-2.0f,1.0f> ;Light position, a little into the scene (w=1.0 makes it positional)

.code
;Initialize Lighting
    invoke glLightfv,GL_LIGHT1, GL_AMBIENT, addr AmbientLight
    invoke glLightfv,GL_LIGHT1, GL_DIFFUSE, addr DiffuseLight
    invoke glLightfv,GL_LIGHT1, GL_SPECULAR,addr SpecularLight
    invoke glLightfv,GL_LIGHT1, GL_POSITION, addr PositionLight
    invoke glEnable,GL_LIGHT1         
    invoke glEnable,GL_LIGHTING


OpenGL supports a fixed number of lights - we are setting properties for Light #1.
We then ENABLE Light #1.
Finally, we ENABLE LIGHTING (there is NO lighting without this final step).

That's enough to get lighting to work, but it won't look at all right unless we add some Normals to our geometry (in our render function).

So - what is a Normal? Well, it's a vector which tells us which Direction something is facing.
In Demo10, we will look now at what Surface Normals are all about.
And in Demo11, we will learn about 'per-vertex' Normals.

A surface normal is an imaginary arrow which points outwards from a surface.
For a triangle, or any convex polygon (of a given winding order), we can calculate the surface normal from any two consecutive Edges using a CrossProduct.

In 3D vector math, aside from vector addition and subtraction, the most common operations are called DotProduct and CrossProduct. I won't elaborate on these math operations too much here; I'll just point out that if we take the CrossProduct of two Vectors, we get a third Vector which is orthogonal to both of the others - which is, in fact, the definition of a Surface Normal. Literally, it will tell us which way the 3D surface is facing, and our Lighting needs that.

Depending on our 'prior knowledge' of the geometry, there are certainly alternative ways to determine a SurfaceNormal, but this way is a good general solution.

The CrossProduct can be calculated as follows:
Given two input vectors v1 and v2,
find v = v1 x v2:
vx = v1y * v2z - v1z * v2y
vy = v1z * v2x - v1x * v2z
vz = v1x * v2y - v1y * v2x

We can write a small macro or function to perform that task quickly.
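Here's a sketch of such a function using the x87 FPU - the name and calling convention are illustrative, not the demo's. Each vector is assumed to be three consecutive real4 values (x, y, z):

CrossProduct proc uses esi edi pOut:DWORD, pV1:DWORD, pV2:DWORD
    mov esi, pV1
    mov edi, pV2
    mov ecx, pOut
    ; vx = v1y * v2z - v1z * v2y
    fld real4 ptr [esi+4]
    fmul real4 ptr [edi+8]
    fld real4 ptr [esi+8]
    fmul real4 ptr [edi+4]
    fsubp st(1), st
    fstp real4 ptr [ecx]
    ; vy = v1z * v2x - v1x * v2z
    fld real4 ptr [esi+8]
    fmul real4 ptr [edi]
    fld real4 ptr [esi]
    fmul real4 ptr [edi+8]
    fsubp st(1), st
    fstp real4 ptr [ecx+4]
    ; vz = v1x * v2y - v1y * v2x
    fld real4 ptr [esi]
    fmul real4 ptr [edi+4]
    fld real4 ptr [esi+4]
    fmul real4 ptr [edi]
    fsubp st(1), st
    fstp real4 ptr [ecx+8]
    ret
CrossProduct endp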
Now all we need to do is calculate the SurfaceNormal for each Face of our tetrahedron, and apply them in our render function.
Code to follow.


Posted on 2010-03-20 06:10:55 by Homer

Your glGenTextures problem sounds like it could be driver-specific... even though the return value is declared as void in older documentation, that same documentation clearly mentions two possible error codes - so it can't be void, can it? :P

Instead of checking the return value of the api, check whether the texid changed from zero.
If it did, consider it successful.

It does change.
After looking into it more, it seems it is like you say: successful, since texid changes (glGetError also reports no error). Every time I call glGenTextures it returns the number of textures generated, which I guess is just specific to my system (and which actually gives it kinda bad behavior imo - calling it with, for example, n = some known error code will make it return that value whether it succeeded or not).


Try that, and let me know what happens.
If that fails we'll investigate this further, as that api function should never fail, and is absolutely required for texturing.

You'll see in demo8 that WinMain now calls 'ReloadTextures' just after checking the return value from calling 'CreateGLWindow'... but it's essentially the same idea, so it won't immediately solve your problem.
Did the demo8 binary actually run ok for you?

I have a couple of different results:

When running the demo8 binary you attached, it first says "failed glGenTextures"; then, after pressing OK, it continues and draws the shape, but completely white (understandable, since after the failure it doesn't finish the rest of LoadTexture).

Modifying the demo7 code to accept glGenTextures if texid has changed results in white shapes, which it shouldn't.

Modifying the demo8 code to accept glGenTextures if texid has changed results in success.

After modifying, I can't see why demo7 won't work when demo8 does - but it works at least, and I don't know if it's worth the time to track down what's actually failing in demo7 for me.


PS: I added a line to find out the OpenGL version at runtime and throw the string to debug output.
Like glGenTextures, it requires a valid Render Context.
Apparently I'm using OpenGL v3.2.0
This probably explains what's going on with the differences in return values / documentation of the api.
Chances are good that your code is actually working fine - but I would still suggest upgrading your OpenGL driver at your videocard vendor's site - and I am seriously considering throwing an error for early versions of OpenGL drivers and telling the user to go update their stuff... 1.4 is quite out of date now; version 4 is in beta.

Not much I can do about it :(
The driver I have is the latest one, dating from Feb 2008,
and after running the demo8 code I can confirm it's version 1.4.0.

I have only tested things up to demo8 so far, but since my OpenGL version is kinda old, I guess there will be parts later on which I can't do. Is there anything to do about it except getting new hardware?



edit: source for demo10 is missing?
Posted on 2010-03-20 07:29:02 by Azura
Every time I call glGenTextures it returns the number of textures generated


Actually, it returns the first unused TextureName - these are simply indices, starting at 1 (0 is reserved).
If you loaded a few textures, the number would increment.
There is no guarantee the numbers will always be in order, as you might (?) release them out of order, leaving unused TextureNames. I refer to OpenGL TextureNames as TextureIDs, since that's what they really are.
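You can see the 'indices' idea if you ask for several names at once - a small illustrative snippet (texids is a placeholder name):

.data
texids dd 3 dup(0)
.code
invoke glGenTextures, 3, addr texids
; on a fresh context this typically fills texids with 1, 2, 3 -
; though only 'currently unused' is guaranteed, not consecutive order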



After modifying, I can't see why demo7 won't work when demo8 does - but it works at least, and I don't know if it's worth the time to track down what's actually failing in demo7 for me


There was a small bug in the TextureLoader logic, which I silently fixed.
It was in one of the error cases - I can't remember which, but I fixed it.


since my OpenGL version is kinda old, I guess there will be parts later on which I can't do.
Is there anything to do about it except getting new hardware?

Well, it's not going to matter for quite a while, and it's still not the end of the world... it is generally possible to emulate most advanced stuff on low-end hardware via different (slower) techniques. We can make our programs scale to the hardware they are running on by choosing, at runtime from several possible techniques, the best one that will run on that hardware. And we can also give the user the ability to bias this decision-making (turn down features) in order to improve performance on slower machines.

It should not be expected that everyone has the latest, greatest hardware and drivers - we should be able to code around this fact of life.

I will at some time soon repost all the demos with some corrections and so on.
But I do want to press on with the series.
Anyway, I'm glad demo8 worked ok for you :)
And I will change the TextureLoader logic to work on both new and older OpenGL drivers, thanks to your informative response :)

edit: Oh - Source for Demo10 will be posted as soon as I've finished describing what makes it tick.


Posted on 2010-03-20 08:06:41 by Homer
OK, I'm gonna let the code speak for itself.
If anyone wants to ask me what a SurfaceNormal is, then I will explain it.
The source contains some new macros for performing operations with Vectors of arbitrary precision.
It also contains a small function for calculating the SurfaceNormal from three consecutive Points of a Face - to Vec3 precision, which is what we need for Normals in OpenGL.
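The general recipe such a function follows looks like this - a sketch with hypothetical macro names (the attached source has its own):

; given three consecutive points P1, P2, P3 of a face (in winding order):
Vec3Sub   edge1, P2, P1           ; first edge
Vec3Sub   edge2, P3, P2           ; second edge
Vec3Cross normal, edge1, edge2    ; orthogonal to both edges = the surface normal
; normalize if needed - OpenGL can also be told to do that for us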

The next demo will show how to calculate a Normal for each and every Vertex in our tetrahedron, so we can have nicer lighting.

Attachments:
Posted on 2010-03-20 08:36:51 by Homer
Demo11: Simple Lighting with Vertex Normals

We have the SurfaceNormals.
Now we want to find a Normal for each VERTEX instead.
How do we do that?

For each Vertex in our shape, we will sum the SurfaceNormals of all the Faces which share that Vertex, and take the average.

But this time I'm going to cheat, based on my prior knowledge of the shape.
Since the shape is a regular Convex Solid which is centered upon its Origin, the vertex normals are simply the vertices themselves - that is to say, if we want to know the direction of the arrow from the origin to each vertex, it is quite simply the value of each vertex point.
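For completeness, the general (non-cheat) recipe in pseudocode:

; for each vertex V in the shape:
;     N = (0,0,0)
;     for each face F that contains V:
;         N = N + SurfaceNormal(F)
;     VertexNormal(V) = normalize(N)

Dividing by the face count and then normalizing yields the same direction, so the final normalize is all we need.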

Posted on 2010-03-20 08:59:57 by Homer

It should not be expected that everyone has the latest, greatest hardware and drivers - we should be able to code around this fact of life.


Especially with OpenGL there can be quite a few problems in this area.

Firstly, do not expect the latest drivers for given hardware to also support the latest OpenGL specifications. For example, on Mac OS X, you still only get OpenGL 2.1, which dates from 2006. On Windows, Intel only supports OpenGL 2.0 (from 2004), even on their latest DX10 hardware.

Secondly, each driver has its own shader compiler built in, which may have slightly different behaviour (also note that GLSL syntax is not always fully backwards compatible with older versions). So shaders that compile on one vendor's drivers are not guaranteed to work on another vendor's drivers as well, even when both support the same OpenGL/GLSL version.

Lastly, the behaviour of OpenGL itself is not fully backwards compatible... Some functionality is scrapped altogether in newer versions... in other cases, new extensions may alter the behaviour of existing APIs. A good example of that is the use of vertex arrays. When you use vertex buffer objects, you no longer pass pointers, but rather element offsets to gl*Pointer() functions. With older versions of OpenGL, binding a VBO with name 0 will disable vertexbuffers, and restore the legacy functionality. With newer versions, binding VBO with name 0 is an invalid operation.
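To illustrate the VBO point - a sketch, with hVBO as a placeholder for a previously created buffer object name:

invoke glBindBuffer, GL_ARRAY_BUFFER, hVBO   ; source vertex data from the VBO
invoke glVertexPointer, 3, GL_FLOAT, 0, 0    ; the last argument is now a byte offset, not a pointer
invoke glBindBuffer, GL_ARRAY_BUFFER, 0      ; legacy GL: back to client-side arrays; newer GL treats this differently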

Things can get pretty hairy if you try to write OpenGL code that actually works on machines other than your own.
Posted on 2010-03-20 09:54:30 by Scali

Here is the sourcecode for Demo11.
It's a simple modification of Demo10.
All I've done is change the render function to use a Normal for each vertex - and as mentioned, in my special case of a tetrahedron centered on its origin, I was able to 'know' the VertexNormal for each vertex in advance.

Just a few quick words about Directions and stuff.

In OpenGL, you have a coordinate system where you are looking into the XY Plane, along the MINUS Z axis.
You'll note that to see our tetrahedron, we are translating it to MINUS 6 in the Z axis.
So Negative-Z means 'into the screen'.
When we set the Position of our Light, you'll notice that we put it at PLUS 4 in the Z axis.
This implies that the Light is somewhere just behind the viewer, thus lighting the Scene in front of it.
We could have just left it at the Origin (where our camera is), but it would be a little close to the subject, and we wouldn't be able to see the light 'fall off' over distance.
This is called 'attenuation'.
OpenGL performs attenuation automatically; however, it is possible to manipulate the attenuation in a few ways.
At the moment, our light is actually Directional, and just happens to be pointing in -Z by default.
We can make a spotlight by setting a cone radius, and we can set the Direction.
It is good to imagine a default light as being a cone light of 180 degrees spread!!
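For reference, the fixed-function falloff follows this formula, whose three factors are set with glLightf:

attenuation = 1 / (kc + kl*d + kq*d*d)

where kc = GL_CONSTANT_ATTENUATION (default 1.0), kl = GL_LINEAR_ATTENUATION (default 0.0), kq = GL_QUADRATIC_ATTENUATION (default 0.0), and d is the distance from the light to the vertex being lit. Note that with the default factors the result is always 1.0, so the factors must be changed before any falloff is visible.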

In the next demo, we'll turn our Light into a spotlight.
Attachments:
Posted on 2010-03-20 19:27:48 by Homer
My Demo10 and Demo11 are showing different lighting (at least a different strength of light; the directions are hard to tell) - shouldn't they be more or less the same?

Also, since these are normals, shouldn't they be normalized (I don't know if that's the word used in English, but have a length of 1)?
I mean, that's how a normal is in math; in OpenGL or graphics I have no idea, but I guess it should be the same?
Posted on 2010-03-26 08:34:14 by Azura
Not sure how I should explain this, but I'll give it a try.

Before I saw the results of the demos that have lighting included (especially demo11), I thought the result would look like before, but with the lit parts brighter.
Looking at how the result really turned out, I also notice that the parts which don't have much light on them become darker than before - so adding lighting also adds some kind of 'darkness'. Is there any way to change how light or dark this darkness should be?
Posted on 2010-03-26 08:39:56 by Azura
#1 - OpenGL can normalize its Normals for us (via glEnable with GL_NORMALIZE), it just needs a Direction :)
#2 - We can modify the 'attenuation' of a Spotlight - how fast the light fades and how big the cone of the spotlight is etc.

I didn't pick a very good example shape to show lighting effects, because the angle between any two faces is quite severe in our example - this tends to exaggerate lighting effects just as it does in real life.
Posted on 2010-03-28 07:01:21 by Homer
This is an amazing set of tutorials, even though texturing does not work on my nvidia :( I'm sure I can sort that out if you continue.
Posted on 2010-08-10 14:26:17 by danielrhodea

How about some requests?
Posted on 2010-08-11 00:28:20 by Homer
Well, one request that I'd like to put in:
Could you include a working binary with every sample you release?
That way it takes some uncertainty out of the equation. You know that the binary is built correctly, so that's what it SHOULD be doing.
If you set up the build environment yourself, it may not build exactly as intended.

In fact, I'm lazy myself, and can't really be bothered to build the samples from source code, because I'd have to set up a build environment for it specifically. I'd just like to read through the sourcecode and run the binary to see what it does.

I'd like to say the same for your physics stuff. It's interesting to read through the various posts, but it would be nice to have some 'hands on' stuff as well. Some simple binary to play with, and to see how it works in practical situations. It might make things 'come alive' to people more.
Posted on 2010-08-16 04:39:08 by Scali