Everything is going well so far.

I've redesigned my TerrainPatch object to recursively construct a full-depth QuadTree (of indexed trifans or whatever other geometry we choose to spew), calculate the error metrics at the leaves, and propagate the error metrics back through the tree as we return from the recursion.

By full-depth, I mean that recursion only terminates when the (pixelspace) distance between two heightmap sample points (of a quad) is less than one pixel (on BOTH axes)... i.e., we ran out of sample resolution; if we 'zoom in' any further, we will not see any new data, just interpolation of our data as taken at full resolution.

Problem with D3DXComputeNormalMap.

It generates the Normals, yes, and we already know that they are encoded as three Signed Bytes (-128 to +127), but they are defined in TEXTURE SPACE (sometimes called Tangent Space).

I need to convert them back into WorldSpace, or else perform all Normal calculations in Tangent Space.

One solution requires a transform matrix, the other requires the inverse of that matrix.

Until I figure out how to make that matrix, I'm screwed.

http://www.gamasutra.com/view/feature/1515/messing_with_tangent_space.php?page=2
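For reference, here's the shape of that matrix in C (a sketch with my own names - `Vec3`, `to_tangent`, `to_world` - nothing from D3DX): the Tangent, Bitangent and Normal vectors laid out as the rows of a 3x3 matrix transform worldspace vectors into tangent space, and because that basis is orthonormal, the inverse is just the transpose, whose columns are the same three vectors.

```c
#include <assert.h>
#include <math.h>

typedef struct { float x, y, z; } Vec3;

/* World -> tangent: multiply by the matrix whose ROWS are t, b, n. */
static Vec3 to_tangent(Vec3 t, Vec3 b, Vec3 n, Vec3 v) {
    Vec3 r = { t.x*v.x + t.y*v.y + t.z*v.z,
               b.x*v.x + b.y*v.y + b.z*v.z,
               n.x*v.x + n.y*v.y + n.z*v.z };
    return r;
}

/* Tangent -> world: the inverse of an orthonormal basis is its transpose,
   i.e. the matrix whose COLUMNS are t, b, n. */
static Vec3 to_world(Vec3 t, Vec3 b, Vec3 n, Vec3 v) {
    Vec3 r = { t.x*v.x + b.x*v.y + n.x*v.z,
               t.y*v.x + b.y*v.y + n.y*v.z,
               t.z*v.x + b.z*v.y + n.z*v.z };
    return r;
}
```

The real work, of course, is choosing t and b for each face from the texture-coordinate gradients, which is exactly the per-face matrix problem.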

It looks prohibitively expensive to convert UVW to XYZ !!!

I mean really REALLY expensive.

Several times what it would cost to use the old trusted method (calculate surfacenormals then average those face-shared values to find the vertexnormals).

Alternatively, I could come up with my own solution.

I could, for example, do this:

For each heightmap sample point, calculate the sum of the vectors from that point to four (or eight) neighbour sample points (wrapping as needed).

If we can calculate the 'unit stepovers' required to step along the resulting vector, then we have our WorldSpace VertexNormal.

Or is there an easy and quick way to convert those pesky normalmap values?

I haven't played with texture matrices, but isn't it possible to supply the same matrix for both the world/view transformation (so that vertices get transformed) and the texture transformation (so the normal maps get transformed)?

No.

Let's be clear.

We have many SPACES which are different Coordinate Systems.

Texture Space... "face space".

Object Space... obvious

View Space... "camera space with depth of view, orientation, and position"

Projection Space... "camera space in bindpose, with depth of view"

World Space... "absolute space"

And there are others I have used, such as Barycentric Space... but back to the point.

When we talk about moving stuff from one Space to another, we're talking about transforming it from one Coordinate System to another... usually using a Matrix.

The combined World, Projection and View matrices will get you into Camera space... see the above list.

Transformations are non-commutative, so we can't just use world*view, or else we have no depth of view... our camera lens is flat and has no focus.

We need to include the projection matrix.

But anyway, that just gets us to Camera space, while we still need to get into 'face space'... or 'face plane space', if that makes it sound nicer.

We are at least two matrices away from our goal, and it gets worse.

We need to find one matrix to calculate the face normal, and then split it into separate matrices for each vertex on that face !!!

Other solutions to this involve finding the surfacenormal of the face using two edges then doing some other junk... Sigh!

Look, Texture Space is great for Pixel Shaders and such, but it's not a whole lot of use to us CPU coders unless we move all our other stuff into texture space as well.

That requires a matrix.

Moving stuff from worldspace to texturespace or back requires a compound matrix which in one MatrixMultiply uses a different matrix per vertex per face.

Ouch.

This geometry is regular and we can find neighbours easily at any LOD.

I think it's cheaper just to calculate all the darned facenormals and then find the vertexnormals in the usual way.

What we need to do:

At the leaves of our tree, discern triangles.

Calculate their facenormals.

Calculate each vertex normal = the average of the normals of the faces it shares. For a regular geometry, it's not hard.

We only need to do it at the leaves because we only want the normals for lowest-level calculation of error metric, to be handed back up the tree.

We do NOT need to store Planes inside our Nodes of our Tree at any place.

It might be nice, but not needed.

I was almost right about the matrix-per-vertex-per-face claim.

It turns out that since all three vertices of a Triangle are coplanar, they can use the same matrix.

So for Triangles, we need one matrix per plane.

But for more complex geometries, even just quads, we need to calculate one matrix per vertex, since we can't be sure that the vertices are coplanar in worldspace.

So without further ado, here's what it costs us to build one such matrix:

2 Additions

27 Subtractions

51 Multiplications

1 Division

==

81 unavoidable operations

Like I said, it costs a whole lot more than finding vertexnormals from facenormals does.

Now, that's JUST to calculate a matrix, not to Apply it to transform vertices.

The cost to find the Surface Normal of a Triangle is:

2 adds

9 subs

9 muls

3 divs

1 sqrt

=====

24 unavoidable operations

To find vertex normals, we need to perform another 9 adds and 3 divides per vertex = 36 more operations per triangle.

====

60 unavoidable operations per triangle, in total.

Now that I am thinking about calculating accurate worldspace normals based on surfacenormals taken at the unit-triangle LOD, my recursive tree builder no longer works for me.

It becomes a chore to access the geometry data for neighbours at the same LOD in different tree branches.

It makes sense to rewrite that code once more.

This time, we start at the Leaves of the tree, by sampling the heightmap at the unit level, and build the tree in reverse.

Now, we can access all of the data at the deepest level within the same iteration of the function :)

I've decided to leave my code alone and deal with the calculation of my normalmap separately.

My normalmap starts out full of zeroes...

As soon as the Heightmap is loaded, I will process heightmap pixels in a loop...

At each step, four pixels will be considered.

From these, a theoretical Quad of two triangles is imagined.

For each triangle, the surfacenormal is calculated; we then add that normal value to what's stored in the same three array positions IN THE NORMALMAP... yes, same normal, stored in all three places.

We do the same for the other triangle as well.

Now we flip the split line on our imaginary quad and repeat those steps... doing this we improve our result by a factor of four - it's as if we have used all 4 triangles of a trifan :)

Now we're done with this Quad, we increment our pixel-stepping loop, lather, rinse and repeat.

Finally, when we've run out of fresh pixels, our NormalMap contains vectors which are the Sum of the SurfaceNormals of all Faces sharing each Vertex... they are vector sums, we need to Normalize them.

Having done that, we have a vast array of farm fresh terrain vertex normals at maximum resolution - and it doesn't affect the existing code :)

I did look into gpu-accelerated terrain deformation schemes such as parallax occlusion mapping, but I can't see how any of them work with collision detection, so I am giving them a miss for now.

It would have been nice if Microsoft had included WORLDSPACE in the flags for D3DXComputeNormalMap.

Anyhow, time to put my coding hat on and write the new NormalMap generator function.

It's been a few days since I last posted.

I've implemented the idea I mentioned last time...

Here is my code for calculating a very accurate map of vertex normals for a heightmapped surface.

The algorithm I've used reduces the potential number of pixel lookups substantially, eliminating much oversampling and resulting in a nice fat speedup... sure, there's more room to optimize this code; my point is that optimizing the ALGORITHM is most often more worthwhile than optimizing the code.

The resolution of the heightmap is arbitrary, and defines the bounds of the array of normals produced.... that is to say, a Normal is produced for each Pixel of heightmap information.

I have not included a function called SurfaceNormal which returns a surfacenormal for a triangle given as three points in 3D space... however I have posted that in the past.

This code is unproven, but I have fed it various test data and looked at the normals it produces, and they seem sane at first glance.

Also, I've tried with and without the 'divide by n', and as I suspected, it made no difference, since Normalizing a vector-sum is equivalent to Averaging those input normals :)

;Allocate and calculate an array of worldspace Normals for the theoretical
;vertices resting on the terrain surface based on the heightmap at max. resolution:
;Forge theoretical triangles between groups of 4 heightmap texels, and produce
;several surface normals, and Sum those normals for each participating heightmap texel.
;NOTE:
;This proc assumes that your HeightMap is loaded into a DXPixelmap object.
;Input Params:
;WorldSize is the axial UNSIGNED size of the entire world, as a 32bit float.
GenerateNormalMap proc uses esi edi ebx WorldSize:real4
  LOCAL xEnd,yEnd               ;Loop control variables
  LOCAL xCur,yCur
  LOCAL vex1:D3DXVECTOR3        ;Four theoretical vertex positions on the terrain surface
  LOCAL vex2:D3DXVECTOR3
  LOCAL vex3:D3DXVECTOR3
  LOCAL vex4:D3DXVECTOR3
  LOCAL vOut1:D3DXVECTOR3       ;Four theoretical surface normals from four theoretical triangles
  LOCAL vOut2:D3DXVECTOR3
  LOCAL vOut3:D3DXVECTOR3
  LOCAL vOut4:D3DXVECTOR3
  LOCAL ptrA,ptrB,ptrC,ptrD     ;Four normal-array element pointers associated with 'vex'

  ;Create an Array to hold our Normals
  mov g_NormalMap,$New(Array,Init,0,sizeof D3DXVECTOR3)
  ;The array needs two Dimensions (Width, Height taken from HeightMap)
  mov edx,g_HeightMap
  OCall g_NormalMap::Array.DimAppend,[edx].DXPixelmap.dWidth,ARR_MEMZERO
  mov edx,g_HeightMap
  OCall g_NormalMap::Array.DimAppend,[edx].DXPixelmap.dHeight,ARR_MEMZERO
  mov edi,g_HeightMap
  mov eax,[edi].DXPixelmap.dHeight
  dec eax                       ;dont process last line in Y
  mov yEnd,eax
  mov eax,[edi].DXPixelmap.dWidth
  mov xEnd,eax
  xor ecx,ecx
  .while ecx<yEnd
    push ecx
    mov yCur,ecx
    xor ecx,ecx
    .while ecx<xEnd
      push ecx
      mov xCur,ecx
      DbgDec xCur
      DbgDec yCur
      ;Obtain ptrs to array elements at (X,Y) and (X,Y+1)
      mov ptrA,$OCall (g_NormalMap::Array.ItemPtr,xCur,yCur)
      mov eax,yCur
      inc eax
      mov ptrB,$OCall (g_NormalMap::Array.ItemPtr,xCur,eax)
      ;Calculate Vertex at (X,Y)
      OCall g_HeightMap::DXPixelmap.GetPixel,xCur,yCur
      shr eax,8
      and eax,255
      fildReg eax
      fmul WorldSize
      mov edx,255               ;Heightmap samples are Bytes (0 to 255)
      fildReg edx
      fdiv
      fstp vex1.y
      ;
      fild xCur                 ;Vertex XZ coords are Scaled by WorldSize
      fmul WorldSize            ;and Biased by Half WorldSize
      fild xEnd
      fld1
      fsub
      fdiv
      fld WorldSize
      fmul r4_half
      fsub
      fstp vex1.x
      ;
      fild yCur
      fmul WorldSize
      fidiv yEnd
      fld WorldSize
      fmul r4_half
      fsub
      fstp vex1.z
      ;Calculate Vertex at (X,Y+1)
      mov eax,yCur
      inc eax
      OCall g_HeightMap::DXPixelmap.GetPixel,xCur,eax
      shr eax,8
      and eax,255
      fildReg eax
      fmul WorldSize
      mov edx,255
      fildReg edx
      fdiv
      fstp vex2.y
      ;
      fild xCur
      fmul WorldSize
      fild xEnd
      fld1
      fsub
      fdiv
      fld WorldSize
      fmul r4_half
      fsub
      fstp vex2.x
      ;
      fild yCur
      fld1
      fadd
      fmul WorldSize
      fidiv yEnd
      fld WorldSize
      fmul r4_half
      fsub
      fstp vex2.z
      pop ecx
      push ecx
      .if ecx==0
        ;dont do anything first iteration
        ;we need 4 vertices, we only have 2
      .else
        DbgVec3 vex1
        DbgVec3 vex2
        DbgVec3 vex3
        DbgVec3 vex4
        ;subsequent iteration : calculate surfacenormals
        ;The leftmost (previous) two pixels are stored in vex3 and vex4
        ;The rightmost (current) two pixels are stored in vex1 and vex2
        ;3 1
        ;4 2
        invoke SurfaceNormal,addr vOut1,addr vex3,addr vex4,addr vex2
        invoke SurfaceNormal,addr vOut2,addr vex3,addr vex2,addr vex1
        invoke SurfaceNormal,addr vOut3,addr vex3,addr vex4,addr vex1
        invoke SurfaceNormal,addr vOut4,addr vex1,addr vex4,addr vex2
        ;DbgVec3 vOut4,,"Surface Normal 4 of 4"
        ;We can simply sum the four resulting surfacenormals
        fld vOut1.x
        fadd vOut2.x
        fadd vOut3.x
        fadd vOut4.x
        fstp vOut1.x
        fld vOut1.y
        fadd vOut2.y
        fadd vOut3.y
        fadd vOut4.y
        fstp vOut1.y
        fld vOut1.z
        fadd vOut2.z
        fadd vOut3.z
        fadd vOut4.z
        fstp vOut1.z
        ;DbgVec3 vOut1,,"Sum of 4 Surface Normals"
        ;and add the resulting vector to the array of vector sums
        ;at the same places as the four input pixels
        mov eax,ptrA
        mov ebx,ptrB
        mov ecx,ptrC
        mov edx,ptrD
        fld vOut1.x
        fadd [eax].D3DXVECTOR3.x
        fstp [eax].D3DXVECTOR3.x
        fld vOut1.x
        fadd [ebx].D3DXVECTOR3.x
        fstp [ebx].D3DXVECTOR3.x
        fld vOut1.x
        fadd [ecx].D3DXVECTOR3.x
        fstp [ecx].D3DXVECTOR3.x
        fld vOut1.x
        fadd [edx].D3DXVECTOR3.x
        ;fdiv r4_4
        fstp [edx].D3DXVECTOR3.x
        fld vOut1.y
        fadd [eax].D3DXVECTOR3.y
        fstp [eax].D3DXVECTOR3.y
        fld vOut1.y
        fadd [ebx].D3DXVECTOR3.y
        fstp [ebx].D3DXVECTOR3.y
        fld vOut1.y
        fadd [ecx].D3DXVECTOR3.y
        fstp [ecx].D3DXVECTOR3.y
        fld vOut1.y
        fadd [edx].D3DXVECTOR3.y
        ;fdiv r4_4
        fstp [edx].D3DXVECTOR3.y
        fld vOut1.z
        fadd [eax].D3DXVECTOR3.z
        fstp [eax].D3DXVECTOR3.z
        fld vOut1.z
        fadd [ebx].D3DXVECTOR3.z
        fstp [ebx].D3DXVECTOR3.z
        fld vOut1.z
        fadd [ecx].D3DXVECTOR3.z
        fstp [ecx].D3DXVECTOR3.z
        fld vOut1.z
        fadd [edx].D3DXVECTOR3.z
        ;fdiv r4_4
        fstp [edx].D3DXVECTOR3.z
      .endif
      ;shadow-buffer the last 2 pixel-vertices we fetched
      m2m vex3.x,vex1.x
      m2m vex3.y,vex1.y
      m2m vex3.z,vex1.z
      m2m vex4.x,vex2.x
      m2m vex4.y,vex2.y
      m2m vex4.z,vex2.z
      ;and pointers to the destination vector-sum array elements
      m2m ptrC,ptrA
      m2m ptrD,ptrB
      pop ecx
      inc ecx
    .endw
    pop ecx
    inc ecx
  .endw
  ;Finally, we need to Normalize each and every vector-sum in our Array
  xor ecx,ecx
  .while ecx<=yEnd              ;yEnd = height-1, so use <= to include the last row
    push ecx
    mov yCur,ecx
    xor ecx,ecx
    .while ecx<xEnd
      push ecx
      OCall g_NormalMap::Array.ItemPtr,ecx,yCur
      push eax
      invoke D3DXVec3Normalize,eax,eax
      pop eax
      DbgVec3 eax,,"Terrain Normal"
      pop ecx
      inc ecx
    .endw
    pop ecx
    inc ecx
  .endw
  ret
GenerateNormalMap endp

I am concerned that you might think that there are "accurate" normals that can be generated from heightmap data... heightmaps do not contain enough information for this. Generating normals from heightmap data requires an underlying assumption about the surface that the heightmap data approximates.

Early in a project this isn't an issue, because the programmer just wants normals that provide some level of smoothing for lighting calculations, so that things "look better"...

But when you get into more advanced things, such as physical interactions with the terrain, those underlying assumptions become very important. The various methods of generating the normals (and thus which assumption was made) each provide a different illusion of curvature, and the physical interactions should ideally coincide with that specific illusion.

I would work from the reverse, with a plan on using something like cubic interpolation for sampling the heightmap at higher resolutions for the physics, and as such would derive the normals from the gradient of each vertex under that assumption.
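(To make that concrete with my own sketch: a 1D Catmull-Rom segment and its analytic slope, which would be applied separably in x and z, the normal then being built from the two slopes rather than from triangle faces.)

```c
#include <assert.h>
#include <math.h>

/* 1D Catmull-Rom interpolation between p1 and p2 (t in [0,1]),
   with p0 and p3 as the outer control samples. */
static float catmull_rom(float p0, float p1, float p2, float p3, float t) {
    return 0.5f * ((2.0f*p1)
                 + (-p0 + p2) * t
                 + (2.0f*p0 - 5.0f*p1 + 4.0f*p2 - p3) * t*t
                 + (-p0 + 3.0f*p1 - 3.0f*p2 + p3) * t*t*t);
}

/* Analytic derivative of the same segment: the slope the normals would use. */
static float catmull_rom_deriv(float p0, float p1, float p2, float p3, float t) {
    return 0.5f * ((-p0 + p2)
                 + 2.0f * (2.0f*p0 - 5.0f*p1 + 4.0f*p2 - p3) * t
                 + 3.0f * (-p0 + 3.0f*p1 - 3.0f*p2 + p3) * t*t);
}
```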

Thanks for your comment :)

I have to agree with your statements and suggestions in principle.

But I will disagree with them for the purposes of this discourse, and in respect to this DLOD implementation.

The fundamental premise that I am working on is that the heightmap, at full resolution, provides an array of accurate point representations of the theoretical surface, and that we will generally NOT wish to sample data at 'higher than max resolution', i.e. interpolation (of any kind) between actual heightmap samples is unwarranted.

The heightmap in our case contains the finest representation of geometry, and we will typically be rendering with LESS than maximum resolution, effectively skipping some of our heightmap samples, and this is the important part, this is where the Error Metrics come into play.

If we say that at maximum resolution our heightmap data PERFECTLY represents the surface (as it would at INFINITE resolution), then we can use that as a basis to determine the amount of visual error that would be produced at any LOWER resolution. We precalculate these error values at various levels of density (resolution), and then at runtime we scale them with the viewing distance, and thus determine whether a given amount of spatial error at a given viewing distance is acceptable or not.

If it's not acceptable, we travel further down our tree; the leaves represent our finest level of detail (max heightmap resolution), where the error is guaranteed to be zero at ANY viewing distance.
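That runtime test can be sketched like so (my own names and scaling; kScreen is a placeholder that folds the projection into one constant, roughly screenHeight / (2 * tan(fovY/2))):

```c
#include <assert.h>

/* Accept a quadtree node if its precalculated worldspace error, projected
   to an approximate screenspace size at this viewing distance, falls under
   the allowed screenspace error (in pixels). */
static int node_is_acceptable(float worldError, float viewDistance,
                              float kScreen, float maxPixelError) {
    if (viewDistance <= 0.0f) return 0;     /* camera inside the node: refine */
    return (worldError * kScreen) / viewDistance <= maxPixelError;
}
```

A leaf's zero error passes at any distance, matching the guarantee above.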

Attempting to interpolate further data from the heightmap beyond that point is, in my mind at least, a fundamentally flawed premise, because there can be no 'new' data, and the triangles produced will tend to become unacceptably small, which will interfere with our physics code due to the numerical accuracy of floating point operations... planar functions REQUIRE an 'epsilon' value, i.e. a Tolerance, which dictates the absolute minimum length of an Edge within our tessellation.

There are many, many ways to calculate a Normal, and I am not willing to say any of them is right or wrong. The technique I have used is not new; however, the algorithm I used is four times as accurate as the conventional technique while remaining roughly twice as fast, due to A) taking many more normals into account per vertex and B) shadowing 50% of our pixel fetches.

The attached image shows a terrain whose input heightmap was just 2x2 alternating black and white pixels, yielding "two hills and two valleys".

The heightmap image was expanded during loading into a 12 x 12 pixel image via bilinear filtering.

I wanted to see how much detail I could extract from minimal input information, and without actually applying any interpolation at runtime.

Also, this is the first image which shows the trifan-quadtree technique in action.

At the moment, I am generating my tree down to the resolution of the (expanded) heightmap, and I am rendering only the Leaves, with no culling of any kind... that is to say, the Visibility Error Metric is not yet being applied... but we can see that the tessellation is working.

I've added a new Tool dialog to the application's Tools menu.

It allows you to reload the heightmap from any image file and at any resolution.

There are also controls for painting the terrain height directly.

Eye candy :)

Looks nice, you just need textures with 10x the resolution now ;)

I can obtain 10x the resolution by mapping them 10x more.

The number of times textures are repeated can be chosen in the new Heightmap editing control.

In fact, you can choose any resolution, without changing your texture detail artwork :)

Do note that all these renderings are in 640 x 480 mode !!

I completed the render implementation in the Terrain class, so that it renders all Leaves of our quadtree using both FFPL and PixelShader techniques.

I tested using a rather large world, not sure what tree depth.

For FFPL, the framerate dropped from around 850 fps to just 19 :|

On the bright side, that translates to a hard 367 fps in PS mode.

This is NOT acceptable, but it's OK, because we are currently rendering FAR more triangles than we have to.

The further away stuff gets, the less it matters.

As the terrain gets further away, we accept less accurate LOD nodes in our quadtree... we draw less accurate geometry.

It can work! We'll worry about the dreaded popping effects later !!

Next immediate mission for me is to add a switch to allow toggle of wireframe rendering mode, so we can more clearly see the way the LOD changes with distance.

I have added code to toggle WireFrame rendering via the SpaceBar.

In the attached image, you can see why (even with a relatively low heightmap resolution) the framerate has suffered so much.

There's a heck of a lot of triangles !

Less obviously, most of them are normally obscured from view, or have very small area in screenspace because they are distant from the camera, and could be replaced with less costly 'imposters'.

We can consider the geometry at any level higher in the tree than 'Leaf' depth to be such an imposter.

My immediate goal now is to implement the Error Metrics that will allow us to decide at runtime when to draw an imposter and when we need to travel all the way to the leaves of our quadtree for fine tessellation.

When that is done, I will increase the resolution of the heightmap, which will in turn generate a larger normalmap, a deeper quadtree, and a finer tessellation of the terrain (which we cannot CURRENTLY afford due to the huge cost of the CURRENT rendering code).
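The usual runtime test for this kind of quadtree LOD is to project the node's precomputed geometric error into screenspace and refine only when the projected error exceeds some pixel tolerance. A sketch of that decision; the constants and names are illustrative, not necessarily the exact metric I'll end up with:

```cpp
#include <cassert>
#include <cmath>

// Screenspace error test: a node's geometric error (max height
// deviation of its imposter from the true surface, propagated up
// from the leaves) is scaled by the perspective projection at the
// node's distance. If the result is bigger than our pixel
// tolerance, the imposter would be visibly wrong, so we refine.
bool nodeNeedsRefining(float geometricError,   // world-space error stored in the node
                       float distanceToCamera, // eye to node centre
                       float screenHeightPx,   // e.g. 480
                       float fovY,             // vertical field of view, radians
                       float maxPixelError)    // tolerated error, e.g. 2 px
{
    // Perspective scale factor: world units at this distance -> pixels.
    float k = screenHeightPx / (2.0f * distanceToCamera * tanf(fovY * 0.5f));
    return geometricError * k > maxPixelError;
}
```

Leaves have zero geometric error, so the recursion always terminates; near-flat nodes have tiny errors, so they pass for imposters until the camera is very close.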

The Error Metrics have been implemented, and Dynamic Level of Detail (dynamic tessellation) is working :)

Here's a screenshot that really does it no justice, since you can't see what's actually happening.

As you approach stuff, it 'breaks up' into a finer mesh, and does so more quickly where required, with the 'flatter' areas requiring a closer viewing distance before they'll 'split'... i.e. flat areas seem 'reluctant' to split.

What isn't obvious in this image is that the World now has some 'cracks' that appear and disappear as you move around, which are caused by two neighbours having too great a difference in LOD.

This image shows the problem and the fix: http://www.delphi3d.net/articles/qt_crack.gif
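The fix shown in that image boils down to two rules: never let two neighbouring quads differ by more than one subdivision level, and then stitch the remaining T-junctions (e.g. by dropping the finer edge's midpoint vertex). A sketch of the level-difference part, with illustrative names:

```cpp
#include <cassert>
#include <cstdlib>

// A crack (T-junction) appears where the finer of two neighbouring
// quads has an edge-midpoint vertex that the coarser quad's straight
// edge doesn't pass through. With a level difference of at most 1,
// each shared edge has at most one such extra vertex, which makes
// stitching trivial.
bool edgeCanCrack(int levelA, int levelB)
{
    return std::abs(levelA - levelB) > 1;
}

// Clamp a node's desired level against an already-chosen neighbour,
// so the difference never exceeds one.
int constrainLevel(int desired, int neighbourLevel)
{
    int minAllowed = neighbourLevel - 1;
    return desired < minAllowed ? minAllowed : desired;
}
```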

Now would be a great time to fix those cracks I mentioned.

However before I do that, I'd like to address something far more critical.

Up until now I have avoided creating an actual array of vertices representing the terrain.

I have been storing several vertices in each node of my QuadTree instead.

This is very wasteful.

Since we know we're going to store the full set of vertices for the terrain at their finest granularity, we might as well just build a huge flat array of vertices, and use indices in our tree nodes.

As well as being more memory efficient, a flat array means we're not stuck with recursion for its own sake.

It should at least make it easier for us to detect and fix T junctions :)
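With one big flat vertex array laid out row-major to match the heightmap, a tree node no longer stores vertices at all: it stores heightmap coordinates (or plain indices), and any vertex is found in constant time. A sketch, with illustrative names:

```cpp
#include <cassert>

// The flat array mirrors the heightmap: vertex (x, z) lives at
// index z * width + x. Tree nodes then only need (x, z, span).
struct Vertex { float px, py, pz; float nx, ny, nz; float u, v; };

inline int vertexIndex(int x, int z, int heightmapWidth)
{
    return z * heightmapWidth + x;
}

// Example: the four corner indices of a quad whose top-left
// heightmap coordinate is (x, z) and whose side spans 'span' samples.
void quadCornerIndices(int x, int z, int span, int width, int out[4])
{
    out[0] = vertexIndex(x,        z,        width); // top-left
    out[1] = vertexIndex(x + span, z,        width); // top-right
    out[2] = vertexIndex(x,        z + span, width); // bottom-left
    out[3] = vertexIndex(x + span, z + span, width); // bottom-right
}
```

Neighbouring quads now index the very same corner vertices instead of holding private copies, which is also what makes T-junction detection tractable.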

I've modified my GenerateNormals procedure so that it builds a 2D array of Vertices corresponding to the Heightmap image's Pixels... well, it builds Position and Normal, but doesn't yet fill in the Texture coords.

Anyway, now that I have access to an array of vertex data, I can eliminate a great deal of code from my recursive QuadTree building method... I can build a tree based on 2D heightmap coordinates, vastly simplifying that code.

I still need to perform a World-Halving recursion from toplevel down, but I can do it in 2D relative to the Heightmap, and quickly find any Vertex via those 2D coords since those coords match our Vertex Array exactly.

For some reason, it just seems unnecessarily complicated to build the Tree in reverse (from the finest level, upwards), especially when the top-down approach was already implemented, tested and proven.
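A sketch of what that top-down build looks like in pure 2D heightmap coordinates; the Node layout and pool storage here are illustrative, not the actual TerrainPatch classes:

```cpp
#include <cassert>
#include <vector>

// Top-down tree build: each node is just (x, z, span) into the flat
// vertex array, and splitting halves the span until a quad covers a
// single heightmap cell.
struct Node {
    int x, z, span;  // top-left heightmap coord and quad size in samples
    int child[4];    // indices into the node pool, -1 if leaf
};

int buildNode(std::vector<Node>& pool, int x, int z, int span)
{
    int me = (int)pool.size();
    pool.push_back(Node{x, z, span, {-1, -1, -1, -1}});
    if (span > 1) {                       // recurse until one cell per quad
        int h = span / 2;
        int c0 = buildNode(pool, x,     z,     h);
        int c1 = buildNode(pool, x + h, z,     h);
        int c2 = buildNode(pool, x,     z + h, h);
        int c3 = buildNode(pool, x + h, z + h, h);
        pool[me].child[0] = c0;           // assign via index after recursing:
        pool[me].child[1] = c1;           // the vector may have reallocated,
        pool[me].child[2] = c2;           // so a pointer or reference to the
        pool[me].child[3] = c3;           // node would have been invalidated
    }
    return me;
}
```

Error metrics would then be filled in on the way back out of the recursion, exactly as before, but reading heights straight out of the shared vertex array.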