Good news.
I got the 2D version of my 'fast frustum extraction' code working.
I can now QUICKLY obtain the XZ extremes of the view frustum.
This is great! If I can get my Frustum into 2D land, I can compare it to the HeightMap directly, in order to determine a potentially visible rectangle of the HeightMap !!!
Now I just need to find the 2D box surrounding the 2D frustum, then transform it from WorldSpace to HeightmapSpace to describe the contents of the first (root) Quad we will consider at render time... we sample the visible world, not the Whole world.
I will have to use a less accurate error metric if I don't follow a tree based on the world extremes, but the benefits look to outweigh the cost, because the tree will always be far shallower; indeed, it doesn't really need to be a real tree at all, and can be replaced by a theoretical tree (a recursive function seeking leaves without building a tree).

The code works again by transformation of a geometry from one space to another.
It's based on a 3D version, which uses the inverse of the combined view and projection matrices to transform a Unit Cube from camera BindSpace into WorldSpace, thus producing (in WorldSpace) eight 3D vertices for the frustum corners, without needing to find any Planes or such (although I have code to find those quickly too).
The 2D version just defines a Unit Square in the XY plane (which is what the Camera sees when it is in its default BindSpace, looking into Z), with Z set to zero, and transforms the four vertices as usual.
Then I take the resulting four 3D vertices, and force Y to zero, leaving me with a projection of the frustum on the XZ plane at Y=0.
The idea is that we are projecting the Near Plane along the current View direction onto the ground (XZ) plane.
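The flatten-and-bound step can be sketched like this in plain C++ (the `Vec3`/`BoxXZ` types are illustrative, and it assumes the four corners have already been pushed through the inverse view-projection transform):

```cpp
#include <algorithm>
#include <cfloat>

struct Vec3 { float x, y, z; };
struct BoxXZ { float minX, maxX, minZ, maxZ; };

// Flatten four world-space frustum corners onto the Y=0 plane and return
// their axis-aligned extents in XZ - the "potentially visible rectangle"
// that seeds the root heightmap Quad.
BoxXZ FlattenToXZ(const Vec3 corners[4]) {
    BoxXZ box = { FLT_MAX, -FLT_MAX, FLT_MAX, -FLT_MAX };
    for (int i = 0; i < 4; ++i) {
        box.minX = std::min(box.minX, corners[i].x);
        box.maxX = std::max(box.maxX, corners[i].x);
        box.minZ = std::min(box.minZ, corners[i].z);
        box.maxZ = std::max(box.maxZ, corners[i].z);
    }
    return box;
}
```

The Y components are simply ignored, which is exactly what "force Y to zero" amounts to when all you keep is the XZ bounding box.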
It works - if I look straight down and call my code, the result is four vertices describing a rectangle slightly wider than it is tall, with values close to zero since my camera is near the ground.
The screen is slightly wider than it is tall, which reinforces my belief that I've got it right now.
But let's wait until we can actuate the Camera and alter the View before we declare this a fact.
If it's not working, then I can't follow this implementation path.
I have to prove it better.

I've never seen code that uses this technique; it's just a hunch, based on logic, that it will work.
Next mission : import one of my old Camera classes ..

I have a funny feeling that my square will deform and disappear as I rotate the camera, and if that's true, I'll have to extract all eight 3D vertices and find their 2D bounding box, which wouldn't be too bad.
The cost of doing that versus the savings in using 2D based frustum culling ?? Easy choice !!
Posted on 2007-11-01 09:34:01 by Homer
In order to make checking the frustum culling code more friendly and interactive, I've plugged in one of my old Camera objects and hooked up some controls... and changed the default viewing angle to something more first-person styled, so our two Triangles are laying down nicely in 3D space.
The arrow keys move and strafe the camera in line with the direction it is facing.
The left mouse paints, as usual.
The right mouse is to select a place you want to look at.
You right-click the area of interest... and nothing happens... but if you hold the button down and move the mouse, the interesting area slides into focus over a number of frames, no matter where we actually move the mouse.

What's really happening?
When you right-click, I calculate one of those funky MousePicking Rays, and take note of a point on the Far Plane along that Ray... the "point of interest" is not actually on a Surface, but is a theoretical point at the end of a finite MousePicking ray.
Now as we move the mouse, if the right button is down, I interpolate (very slightly) the camera's look-at point toward the point of interest, then update my camera's internal variables like fPitch and fYaw, apply the new view matrix, and finally calculate a new inverse matrix for future mousepicking.
I just interpolate between the Current and New LookAt positions by 0.1 per WM_MOUSEMOVE :)
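The per-message interpolation is just a linear blend; a minimal sketch (illustrative `Vec3` type and function name, not the actual Camera class method):

```cpp
struct Vec3 { float x, y, z; };

// Ease the camera's current LookAt toward the point of interest by a
// fixed fraction per WM_MOUSEMOVE; repeated calls converge on the target.
Vec3 LerpLookAt(const Vec3& current, const Vec3& target, float t = 0.1f) {
    return { current.x + (target.x - current.x) * t,
             current.y + (target.y - current.y) * t,
             current.z + (target.z - current.z) * t };
}
```

Calling this once per mouse-move message with t = 0.1 gives exactly the "slides into focus over a number of frames" behaviour described above.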

It sounds unwieldy, but from an art-editing point of view it's quite relaxing to use, because you can right-click something, then look away, talk to someone, sip your coffee, move your mouse a bit, and know that when you look back, your 3D view of the world will be as you wished, ready to paint some more.

I'll post this update shortly.
Posted on 2007-11-02 11:11:40 by Homer
I've overwritten the demo at the previous URL with the update :)

You'll notice that the Camera motions are very jerky and stepped... they are.
Camera view (rotation and position) is being updated in response to WM_MOUSEMOVE and WM_KEYDOWN.
These notifications are simple, and simple is a good place to start, but they are simply not timely, and are bad choices for driving the animation.
It would be better to drive such stuff from the MessagePump.

Now I want this : User clicks and releases rightmouse, CameraView interpolates its rotation over a small amount of Time.
It makes sense to me to add a function to my Camera class that supports the interpolation between LookAt points as a driving method accepting the interpolation factor directly, and to call that method from the MessagePump, which is timely.

I'll make those changes, and then it will probably be high time for me to review the architecture I have built.
Posted on 2007-11-02 12:24:05 by Homer

You only need to rightclick now, rotation happens by itself over time.
I am thinking about supporting more input textures by creating 4 larger textures at runtime and copying N input textures into each, ie, Tiling the input textures, and then screwing with the UV values.
If done right, it should work, yes?
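Remapping the UVs into one cell of such a tiled atlas is a one-liner per axis; a sketch under the assumption of an NxN grid of equally sized input textures (names are illustrative):

```cpp
// Remap a [0,1] UV into one cell of an NxN texture atlas, so geometry
// that previously sampled an individual input texture samples the
// correct sub-rectangle of the combined texture instead.
void AtlasUV(float u, float v, int tileX, int tileY, int n,
             float* outU, float* outV) {
    *outU = (tileX + u) / n;
    *outV = (tileY + v) / n;
}
```

The remaining gotcha (not handled here) is bleeding between neighbouring cells when filtering or mipmapping near cell edges.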

Attached is a recent screenshot.
Posted on 2007-11-03 03:51:58 by Homer
I've hooked up the mouse scrollwheel control to change the camera's Elevation.
I switched over to using TriangleFans for rendering, but still use two big theoretical triangles for intersection testing.

As a test, I made the world much larger - 2048 x 2048 units.
Then I screwed with the UV coordinates of my Fan to make the terrain textures repeat several times across my world (so they look nicer up close).
I found that most of my textures are not suitable for TILING !! GASP !!
For a while I panicked, looking into ways to ensure my textures would Tile ok.

Then I woke up, and enabled the MIRROR texturing mode.
Now textures are Flipped when they repeat, which disguises their lack of symmetry.

Also, I had to greatly increase the camera's maximum viewing distance (Far Plane), because D3D was chopping off great chunks of World now.

Finally, I increased the resolution of the AlphaMaps from 32x32 to 64x64, and added code to Save the Combined AlphaMap to a new file, and to try to Load from it next time.

I might try pushing the World size to 4096x4096, we'll see how we go.

Anyway, we have a Big old chunk of Terrain now, with reasonably fine detail texturing, the next step for me will be to get that Macro stage out of the way.
Bearing in mind that we need a 64x64 alphamap to record our detail for one Big old chunk, we can safely map 16x16 alphamaps onto one 1024x1024 super-mega-alphamap-deluxe.
That is to say, we can safely provide detailing for 16x16 Big old chunks with just one 1024x1024 MegaMap.

At the current size of 2048x2048 per Chunk, that would make the GameWorld 32768 x 32768 units in size.
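The arithmetic above can be captured in a tiny mapping function from a world-space position to a MegaMap texel, assuming the figures in this post (16x16 chunks of 2048 units, 64x64 alphamap texels per chunk); constant names are illustrative:

```cpp
const float kChunkSize      = 2048.0f; // world units per chunk side
const int   kChunksPerSide  = 16;      // 16x16 chunks -> 32768 units total
const int   kTexelsPerChunk = 64;      // 64x64 alphamap per chunk -> 1024 texels

// Map a world-space XZ position into the 1024x1024 MegaMap texel grid.
void WorldToMegaMapTexel(float wx, float wz, int* tx, int* tz) {
    const float worldSize = kChunkSize * kChunksPerSide;      // 32768
    const int   texels    = kChunksPerSide * kTexelsPerChunk; // 1024
    *tx = (int)(wx / worldSize * texels);
    *tz = (int)(wz / worldSize * texels);
}
```

One texel of the MegaMap covers 32 world units at these settings, which is the effective resolution of the 'paintbrush'.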


Now, 16x16 Chunks could have unique BaseTextures tiled into 4x4 pages of 1024x1024 pixels per image, meaning each 16 chunks worth of BaseTexture maps onto one 4096x4096 image.
Of course, this means ensuring that the Base Texture is NOT Repeated across a Chunk.. so the extra resolution in the BaseTextures is warranted.
This provides us with a way to get even more detail into our World without requiring any extra rendering passes or loading any extra textures.

Furthermore, we can change the three non-Base textures on a per-Chunk basis, but the artist will probably be forced to ensure that the blending looks correct across Chunk borders by ensuring that unshared textures have no Weight at those borders (have already faded out).

Anyway, cost for this larger World?
SuperMegaAlphaMap : 1024 x 1024 - we need 1 for the PixelShader, and/or 4 for the FixedFunctionPipeLine
BaseTextures : 4096 x 4096 = 4x4 pages at 1024x1024 each - we need 16
InputTextures : 4096 x 4096 = 8x8 textures at 512x512 each - we need up to 4

So, we need to load about 20 textures to display any part of our large Terrain.
Those are big textures though, can we do this?

Below are some pictures of one Chunk of Terrain of size 2048x2048 with textures repeated 4 times.
Picture this as one element of a 16x16 array to imagine the Big Bad World.

Posted on 2007-11-03 12:39:09 by Homer

I've created two new classes for the demo called Terrain and TerrainPatch.

The Terrain.Init method lets us decide how many chunks of Terrain we want in X and Z, and how many times textures should be repeated across each chunk.
It then builds an array of TerrainChunks which are designed to share common Alpha and Height maps, but which can have different sets of input blending textures (if desired).
HeightMap information is applied to the five Vertices of each TerrainChunk immediately.
Rendering is performed in a braindead way (all chunks are rendered, no culling of any kind) using Indexed UserPointer method.

This presents a problem for the existing code, because we're still picking our Paint points from two THEORETICAL triangles lying flat on the ground, so everything goes haywire when we try to paint the lumpy bumpy world.
I will have to perform this check whenever I render a geometry, so really it needs to be driven from the Render function of, in this case, TerrainPatch.

I expect major framerate hits as I continue, but that's ok, because this is an Editor, not a game... I would use different flags when loading resources in that case :P

Posted on 2007-11-04 21:43:13 by Homer
The demo now switches between two behaviours.

With PixelShader turned OFF:
The geometry is rendered with a single TriangleFan.
It remains 'FLAT' at all times.
PickRay detection is versus two theoretical triangles occupying the same space as our Fan.

With PixelShader turned ON:
The geometry is rendered as an array of TriangleFans (I chose 4x4 in the latest binary).
Its vertices are 'bumped' in Y, so we see a very coarse Terrain.
Heightmap info is taken from one of the input textures, just for testing.
PickRay detection is performed against each TriFan's 4 Triangles, until we hit something.
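The per-triangle PickRay test can be sketched with the standard Moller-Trumbore ray/triangle intersection (a generic version, not necessarily the routine the demo actually uses):

```cpp
#include <cmath>

struct V3 { float x, y, z; };
static V3 sub(V3 a, V3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static V3 cross(V3 a, V3 b) {
    return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
}
static float dot(V3 a, V3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Returns true and the hit distance 't' along 'dir' if the ray strikes
// the triangle (v0,v1,v2); false otherwise.
bool RayTri(V3 orig, V3 dir, V3 v0, V3 v1, V3 v2, float* t) {
    V3 e1 = sub(v1, v0), e2 = sub(v2, v0);
    V3 p = cross(dir, e2);
    float det = dot(e1, p);
    if (std::fabs(det) < 1e-8f) return false;   // ray parallel to triangle
    float inv = 1.0f / det;
    V3 s = sub(orig, v0);
    float u = dot(s, p) * inv;
    if (u < 0.0f || u > 1.0f) return false;     // outside barycentric range
    V3 q = cross(s, e1);
    float v = dot(dir, q) * inv;
    if (v < 0.0f || u + v > 1.0f) return false;
    *t = dot(e2, q) * inv;
    return *t >= 0.0f;                          // hit in front of the origin
}
```

Testing each fan's four triangles in turn and stopping at the first hit is exactly the "until we hit something" loop described above.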

Now that I can always correctly collide my imaginary paintbrush with the Terrain's triangles, painting is working properly in all cases  :thumbsup: 8)

Attached is an update of the Binary file, with new, larger AlphaMaps in a subfolder.
You will need to move your textures into a new subfolder called 'Textures'.

NOTE : If everything 'disappears' when you enable PixelShader, do not panic.
You are 'underground' - roll the mouse scrollwheel and you'll find the terrain.
Y motion is capped in one direction, so if it doesn't work, try the other direction :P

Late change : the RightButton now has two behaviours. Clicking it once tells the camera what to look at, like it did before... but now the right button also locks/unlocks motion in the Y axis, and while the right button is held down, the camera's look-at target is updated via the up/down cursor keys.
That means you can use right mouse button for "flying mode".

Let me know your results :)
Posted on 2007-11-05 00:10:18 by Homer
I improved the LookAt behavior a little more, and then got kicked off my computer.
A seven year old boy wanted to play 'the painting game'.
He got the hang of the controls in seconds, and within minutes was familiar with all the application controls.
I guess I've made it user friendly :P

Here's a screenshot showing the level of detail we can achieve with just 64x64 pixels worth of alphamap.
It's being applied to a 4x4 array of terrainchunks which are mapped to share one set of resources.

As we make our World larger, the resolution of the alphamap will once more become insufficient and we'll have to increase that image resolution, but nothing else needs to change.

I think it's worthwhile sharing out the alphamap to a patch-array like this, because it means we get to save a world's worth of edited data to a single file, and it decouples the alphamap file and its data from the complexity of the world - if we changed the number of terrainchunks and/or world size, our edited data will still map to the World correctly.

Posted on 2007-11-05 03:39:50 by Homer
I just changed the size of the World Square from 2km to 64km.
The resolution of the AlphaMap is no longer satisfactory.
Blending still looks great, sure, but our 'paintbrush' is huge !

I had to double the resolution of the alphamaps (now 128x128 pixels).
I had to increase the camera's linear velocity, and I had to increase the number of texture repeats per terrainpatch (from 4 to 12) so that the detail still looks nice up close.
I had to increase the Camera's far-plane distance, now set to a hefty 80 km; we can JUST manage to see the whole world from above before we hit the distance limit... but we're not likely to be painting the terrain from that height often, are we? :P

Now we have a reasonably large world, and it has acceptably fine detail once more.
If we decide that we still can't edit finely enough, we can increase the alphamap resolution further.
Remembering that it maps the terrain detail for the entire World, we're still VERY sweet at 128x128 :P

I'm still only using an array of 4x4 TriFans.
Regardless of the values we choose for this array's dimensions, each TerrainPatch acts as the Root Node of a QuadTree made from TerrainPatches.
To build this Tree, we simply create new TerrainPatches in the four corners of our existing ones, and repeat this process until we run out of HeightMap resolution.
Of course, that's over-simplifying things; I make no apology :P
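The "create new patches in the four corners and repeat" recursion can be sketched like this (an illustrative `Patch` struct, not the real TerrainPatch class; `minSize` stands in for "running out of heightmap resolution"):

```cpp
struct Patch {
    float x, z, size;   // square region of the world, in XZ
    Patch* child[4];    // null at the leaves
};

// Subdivide until a patch would span less than one heightmap sample;
// 'minSize' is the assumed world-space span of one heightmap texel.
Patch* BuildTree(float x, float z, float size, float minSize) {
    Patch* p = new Patch();
    p->x = x; p->z = z; p->size = size;
    for (int i = 0; i < 4; ++i) p->child[i] = 0;
    float h = size * 0.5f;
    if (h >= minSize) {
        p->child[0] = BuildTree(x,     z,     h, minSize);
        p->child[1] = BuildTree(x + h, z,     h, minSize);
        p->child[2] = BuildTree(x,     z + h, h, minSize);
        p->child[3] = BuildTree(x + h, z + h, h, minSize);
    }
    return p;
}

// Helper: count the leaf patches in a built tree.
int CountLeaves(const Patch* p) {
    if (!p->child[0]) return 1;
    int n = 0;
    for (int i = 0; i < 4; ++i) n += CountLeaves(p->child[i]);
    return n;
}
```

Replacing `BuildTree` with a recursive render-time function that never allocates gives the "theoretical tree" mentioned earlier in the thread.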

As I was saying, we build this Tree to maximum resolution, but we dont need to visit all its Nodes.
At rendertime, we will tend to walk toward the leaves, but we might decide to stop a branch recursion earlier.
If we hit a leaf, or decide to quit earlier, we draw that node's geometry.

This decision to quit earlier is based upon the 'lumpiness' of the terrain and the camera's distance from the potential new quad origin, versus a hardcoded threshold.

If that value is selected carefully, then we should never need to check for any of the nasties associated with other ROAM-style LOD algorithms (T-vertices, hard changes in LOD between Neighbours, etc.).

Posted on 2007-11-05 04:19:11 by Homer
I like all the pretty pictures ^_^

(at least that's feedback, right? :P)
Posted on 2007-11-05 04:48:38 by f0dder
@f0dder : Anything that makes me look like a better artist than I am has to be a Good Thing... I'm terrible :P
See how bad my handwriting is?
Well, it's nice to have a bit of eyecandy, I think this place needs more of it :)
And who doesn't like eyecandy? :)

Posted on 2007-11-05 06:21:30 by Homer
@everyone : either I'm doing a really good job of this blog and nobody has any questions, or nobody really cares about this thread.
Which is odd, because it's getting loads of hits :P

Posted on 2007-11-05 09:53:02 by Homer
Standard metrics: if you are doing something right, no one responds :P
Posted on 2007-11-05 12:38:54 by SpooK
I second that ^^
Posted on 2007-11-05 20:26:17 by ti_mo_n
I'm watching, but haven't looked.

Some of your posts mention using triangle fans... probably a bad idea on modern hardware for large terrains, as I have seen nvidia optimization manuals stress that they are less performant than indexed triangle lists on modern gpus... probably something to do with how vertex transformations are currently cached on these cards (i.e., indexed triangle lists offer the programmer ultimate control over the order in which vertices get sent to the vertex processor, and allow for zero vertex redundancy)...

Posted on 2007-11-05 23:11:41 by Rockoon
It's time to start writing the code for view-dependent LOD.
My concept is to start by building a full Tree for each TerrainPatch.
The tree builder recurses until we run out of heightmap resolution (note - we could go lower, since I use a bilinear-filtered pixel fetch, but there's no point in doing so).
Each node contains 5 vertices describing a trianglefan, and we precompute an error metric based on all of the heightmap sample points which fall within this subspace, and store that per node too.

At rendertime, if we determine that a TerrainPatch is partially/fully visible, we walk that patch's tree.
At each node we scale the error metric by the viewing distance, and if the resulting value is greater than a predetermined threshold value, we continue walking.
If we reach a leaf, or decide that the scaled error is acceptably small, we draw that node's trianglefan and return from recursion.
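The walk described above might look like this; it is one plausible reading of "scale the error metric by the viewing distance" (here the error is divided by distance, so far-away bumps matter less), with an illustrative node struct:

```cpp
#include <cmath>
#include <vector>

// Illustrative node: a precomputed error plus four children (null at leaves).
struct QuadNode {
    float error;        // precomputed 'lumpiness' metric for this subspace
    float cx, cz;       // node centre in world XZ
    QuadNode* child[4];
};

// Walk toward the leaves; stop (and draw) when the distance-scaled error
// drops below a hand-tuned threshold, or when we hit a leaf.
void WalkAndCollect(const QuadNode* n, float camX, float camZ,
                    float threshold, std::vector<const QuadNode*>* out) {
    float dx = n->cx - camX, dz = n->cz - camZ;
    float dist = std::sqrt(dx * dx + dz * dz) + 1.0f; // +1 avoids div by zero
    bool isLeaf = (n->child[0] == 0);
    if (isLeaf || n->error / dist < threshold) {
        out->push_back(n);  // draw this node's trianglefan
        return;
    }
    for (int i = 0; i < 4; ++i)
        WalkAndCollect(n->child[i], camX, camZ, threshold, out);
}
```

The nodes collected in `out` are exactly the fans to spit into the receiver buffers each frame.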

So how do we predetermine the threshold value for our error metric?
That very much depends upon how we calculated it..
I'm going to implement code to calculate such an error value, and choose a threshold based on observed results. It's not rocket science.

@Rockoon : I'll make it work first, and play with different primitives later.
The biggest problem is that I am not ready to fill a vertexbuffer yet... this is going to be a dynamic tessellator, and I won't know all the vertices I want until the end of a view-dependent (render-time) walk of a tree that doesn't exist yet :P
As soon as I can walk that tree with respect to the current view, and find all the geometry I wish to draw, and spit that to receiver buffers, I will worry then about what kind of primitives I spit out.
I use indexed trianglefans because they most elegantly explain the quadtree of error metrics this demo will be using for micro-partitioning the geometry.

Posted on 2007-11-05 23:22:10 by Homer

@everyone : either I'm doing a really good job of this blog and nobody has any questions, or nobody really cares about this thread.
Which is odd, because it's getting loads of hits :P

:D No offence, but your posts really stink, and are very boring. I dont mean the info is boring. There IS no info. I mean the POSTs are boring. They are not very well done. Even your posts have no information whatsoever.... at least you could put some effort into the _la la la_ of it : D : D : D Make it a masterful lalala next time is my input, no so blindingly obvious

Posted on 2007-11-06 08:27:19 by Sr. SmokeALot
I use threads like this to describe my random thoughts, much as one might use a mirror.
It's as much for my own benefit as anyone else.
On the other hand, I have the benefit of being able to see the sourcecode, not just the occasional screenshot.

I always assume that if I hit enough nails, one of them will be driven home.
That is to say, if I broach enough topics, then I will draw questions and generate debate.
Particularly so, given that the subject matter is current and relevant.

What kind of information would you like?
Posted on 2007-11-06 09:31:00 by Homer
Well, contrary to Smoke-a-lot, I thoroughly enjoy your posts - haven't got a bloody clue what you're talking about, but it's a joy 'listening' to your ramblings.

Perhaps the difficulty of penning such a narrative is lost upon those who themselves have never tried, especially when the subject material is so involved and the author has no feedback from his audience.

Keep it up Homer, I'm sure that there are more like me out here who would love to comment but dare'st not lest all doubt is removed regarding our complete ignorance.

Posted on 2007-11-06 11:41:25 by phinger
In regards to calculating an error metric for a given quad, it's important to realize what it is that we want to measure.
We want to measure the 'bumpiness' of a region of heightmap data.
But just measuring the variation in Height is not enough.
That tells us nothing about the variation in PLANARITY :)
Let's say several neighbouring triangles have the same surface normal.
Why would we split them? Big flat areas can be represented with less geometry, right?
So, we should be measuring the variation in the (surface OR vertex) normals of Triangles that we haven't discovered yet... HUH?

It can be done.

D3D provides a function called D3DXComputeNormalMap which, given a heightmap image, returns an image whose pixel RGB channels hold the normal components as integer values 0-255 instead of floats in the -1.0 to 1.0 range.
It's quantized and fairly inaccurate, but it's accurate enough for us... in fact, the inaccuracy helps us, by ignoring small variations in normal angle.
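Decoding one texel of such a normal map back to a float vector is just the reverse remapping (illustrative helper, assuming the usual bias of [-1,1] into 0-255):

```cpp
struct Vec3f { float x, y, z; };

// Undo the 0-255 bias applied by normal-map generators: each channel
// maps back to a component in [-1, 1].
Vec3f DecodeNormal(unsigned char r, unsigned char g, unsigned char b) {
    return { r / 255.0f * 2.0f - 1.0f,
             g / 255.0f * 2.0f - 1.0f,
             b / 255.0f * 2.0f - 1.0f };
}
```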

We can perform a Vector Addition on all the normals we sample, and then perform a DotProduct on the resulting vector to get a final value.

Now we just have to do that at every node... ouch, much oversampling... can we improve? YES.
We said we'd build the Tree until we run out of heightmap resolution.
Let's do that, and calculate only the error vectors at the leaves, without taking the dot product yet.
Now propagate those values back up the tree, since they represent partial values of their parents.
At each parent, sum the child vectors, then replace the child vectors with dot products.
When we reach the root, just do the dot product and we're done :)
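One way to read the sum-then-dot metric is: sum the unit normals in a region, then dot the sum with itself. For N perfectly parallel normals the squared length is N*N, and any disagreement shrinks it, so the shortfall grows with bumpiness. This is an illustrative interpretation, not necessarily the exact formula the demo ends up using:

```cpp
struct Vec3 { float x, y, z; };

// Planarity error for a region: 0 when every sampled normal agrees,
// growing as the normals diverge. Assumes 'normals' are unit length.
float PlanarityError(const Vec3* normals, int n) {
    Vec3 sum = { 0.0f, 0.0f, 0.0f };
    for (int i = 0; i < n; ++i) {
        sum.x += normals[i].x;
        sum.y += normals[i].y;
        sum.z += normals[i].z;
    }
    // Dot the summed vector with itself: N*N for a perfectly flat region.
    float d = sum.x * sum.x + sum.y * sum.y + sum.z * sum.z;
    return (float)(n * n) - d;
}
```

Because the sums are plain vector additions, they propagate up the tree exactly as described: each parent adds its children's sum-vectors before taking its own dot product.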

Posted on 2007-11-06 13:38:29 by Homer