I've had a new(?) idea for a dynamic level of detail (dlod) terrain engine which I am calling, for the moment, "Sparse Radial Oversampling".
The terrain consists of a dartboard shape... a pizza-pie trianglefan surrounded by N concentric trianglestrip rings. The player never leaves the centre of the terrain; rather, the terrain "moves about the heightmap with the player", and its height and uv values are continually modified.
The main things to note are that the triangles become larger as they become more distant, and that relatively few heightmap pixels are looked up from frame to frame. Actual height data would be obtained by interpolating heightmap pixels, using the fractional remainder of the actual vertex position.
The radial step-out value between rings probably should be related to perspective. The angular step-around value should be selected based on benchmarking experimental values.
This system achieves dlod without any need to modify the geometry.
Furthermore, it presents the geometry with the graphics card in mind.
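
Here's a rough, untested sketch of the fixed part of the layout, just to make the shape concrete - the ring count, sector count and step-out growth factor are placeholder values that would need benchmarking:

#include <math.h>
#include <stdio.h>

#define NUM_RINGS    16
#define NUM_SECTORS  32
#define BASE_RADIUS  1.0f
#define RING_GROWTH  1.35f     /* placeholder: step-out factor between rings */
#define PI_F         3.14159265f

typedef struct { float x, z; } Vec2;

/* Fill out[] with the flat (x,z) dartboard layout: one centre vertex,
   then NUM_RINGS rings of NUM_SECTORS vertices each.  Heights/uvs get
   patched in every frame from the heightmap, so only this layout is fixed.
   out must hold 1 + NUM_RINGS * NUM_SECTORS entries. */
static int build_dartboard_xz(Vec2 *out)
{
    int n = 0;
    out[n].x = 0.0f; out[n].z = 0.0f; ++n;            /* player sits here */

    float radius = BASE_RADIUS;
    for (int r = 0; r < NUM_RINGS; ++r) {
        for (int s = 0; s < NUM_SECTORS; ++s) {
            float a = (float)s * (2.0f * PI_F / NUM_SECTORS);
            out[n].x = radius * cosf(a);
            out[n].z = radius * sinf(a);
            ++n;
        }
        radius *= RING_GROWTH;   /* triangles get larger with distance */
    }
    return n;
}

int main(void)
{
    Vec2 verts[1 + NUM_RINGS * NUM_SECTORS];
    printf("%d vertices generated\n", build_dartboard_xz(verts));
    return 0;
}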

Anyone interested?
Posted on 2004-07-12 10:03:20 by Homer
Are you going to have the world model follow an infinite plane, or have it be "round", moving over a huge, huge sphere?
What about speeding up the heightmap the same way you do with mipmapping? You precalculate at what distances you need to switch to a less detailed heightmap (depending on resolution).
So when you move forward, your landscape is generated in low detail first, then is continuously made finer and finer?
So why not let the creation of the mipmaps be hardware-assisted? In other words, let the hardware resize it.
For example, you have a 1024*1024 resolution heightmap that rolls around: you write new landscape over pixels that were previously used for the landscape behind you when the player moves forward, then call on the hardware to resize it into new versions at smaller resolutions - 512*512, 256*256, 128*128, 64*64, etc.
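
Something like this rolling update is what I mean - the generator function and the origin bookkeeping here are only placeholders:

#include <stdint.h>

#define MAP_SIZE 1024

static uint8_t heightmap[MAP_SIZE][MAP_SIZE];
static int origin_z = 0;                 /* lowest world row held in the buffer */

/* Placeholder terrain source - in practice this would come from disk,
   noise, or a bigger master map. */
static uint8_t generate_height(int wx, int wz)
{
    return (uint8_t)((wx * 13 + wz * 7) & 0xFF);
}

/* Map any world coordinate onto the toroidal buffer. */
static int wrap(int v) { return ((v % MAP_SIZE) + MAP_SIZE) % MAP_SIZE; }

/* Player moved one cell forward along +z: the row that just fell behind
   is recycled and filled with the world row entering the window ahead.
   The hardware downsample to 512/256/... would be kicked off after this. */
static void scroll_forward_z(void)
{
    int new_wz = origin_z + MAP_SIZE;    /* world row entering the window */
    for (int wx = 0; wx < MAP_SIZE; ++wx)
        heightmap[wrap(new_wz)][wrap(wx)] = generate_height(wx, new_wz);
    ++origin_z;                          /* window now covers origin_z .. new_wz */
}
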
Posted on 2004-07-12 13:25:21 by daydreamer
That's pretty much what I had in mind - except there's no need for mipmapping at all. I was thinking of heightmapping onto a virtual sphere, using height information gleaned from test points on a large bitmap (the heightmap at full resolution), and gracefully devolving it using some simple linear interpolation and sparse test points.

Most interestingly, the spherical approach lends itself to a novel camera modelling technique: we move the player position on the sphere surface, then create a view matrix whose UP vector is defined by a ray from the sphere centre to the player position on its surface... we change the UP vector whenever the player moves about the sphere surface.
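
A rough sketch of that camera setup (the vector helpers and names are just illustrative, nothing engine-specific):

#include <math.h>

typedef struct { float x, y, z; } Vec3;

static Vec3  v_sub(Vec3 a, Vec3 b)  { Vec3 r = { a.x-b.x, a.y-b.y, a.z-b.z }; return r; }
static float v_dot(Vec3 a, Vec3 b)  { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3  v_cross(Vec3 a, Vec3 b)
{
    Vec3 r = { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
    return r;
}
static Vec3  v_norm(Vec3 a)
{
    float l = sqrtf(v_dot(a, a));
    Vec3 r = { a.x/l, a.y/l, a.z/l };
    return r;
}

/* Camera basis for a player standing on the sphere: UP is the ray from
   the sphere centre through the player, as described above; right and
   forward are derived from whatever the player is looking at. */
static void sphere_camera_basis(Vec3 centre, Vec3 player, Vec3 look_target,
                                Vec3 *right, Vec3 *up, Vec3 *forward)
{
    *up      = v_norm(v_sub(player, centre));     /* radial "up" */
    *forward = v_norm(v_sub(look_target, player));
    *right   = v_norm(v_cross(*up, *forward));
    *forward = v_cross(*right, *up);              /* re-orthogonalise forward */
}
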
Posted on 2004-07-12 16:10:16 by Homer
Afternoon, EvilHomer2k.

That's actually a pretty good idea.
All vertex data is generated from a heightmap at level-start.
During frame-move, an index buffer is generated using the dartBoard technique to select which vertices to use.
Naturally, ViewPort Culling is used during the generation of the index buffer.
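
Roughly like this - the vertex numbering matches the dartboard layout sketched earlier, and the angular visibility test is only a stand-in for proper frustum culling:

#include <math.h>

#define NUM_RINGS   16
#define NUM_SECTORS 32
#define PI_F        3.14159265f

/* Vertex index for ring r, sector s (0 is the centre vertex). */
static int vtx(int ring, int sector)
{
    return 1 + ring * NUM_SECTORS + (sector % NUM_SECTORS);
}

/* Crude angular test: is the middle of sector s within the horizontal FOV? */
static int sector_visible(int s, float cam_yaw, float half_fov)
{
    float a = (s + 0.5f) * (2.0f * PI_F / NUM_SECTORS);
    float d = fmodf(a - cam_yaw + 3.0f * PI_F, 2.0f * PI_F) - PI_F;
    return fabsf(d) <= half_fov;
}

/* Build a triangle list for the visible sectors only.  out must hold up to
   NUM_SECTORS * (3 + 6 * (NUM_RINGS - 1)) indices.  Returns the count written. */
static int build_indices(unsigned short *out, float cam_yaw, float half_fov)
{
    int n = 0;
    for (int s = 0; s < NUM_SECTORS; ++s) {
        if (!sector_visible(s, cam_yaw, half_fov))
            continue;
        /* fan triangle: centre to the innermost ring */
        out[n++] = 0; out[n++] = (unsigned short)vtx(0, s); out[n++] = (unsigned short)vtx(0, s + 1);
        /* quad (two triangles) between each pair of adjacent rings */
        for (int r = 0; r + 1 < NUM_RINGS; ++r) {
            out[n++] = (unsigned short)vtx(r, s);     out[n++] = (unsigned short)vtx(r + 1, s); out[n++] = (unsigned short)vtx(r, s + 1);
            out[n++] = (unsigned short)vtx(r, s + 1); out[n++] = (unsigned short)vtx(r + 1, s); out[n++] = (unsigned short)vtx(r + 1, s + 1);
        }
    }
    return n;
}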

What a few people would need is an image explaining how this actually works.

Cheers,
Scronty
Posted on 2004-07-13 08:42:09 by Scronty
Well, I wouldn't even go so far as calculating all the vertex data at level-start - I'd simply sample the sparse points in realtime. And yes, I agree, this deserves some diagrams; if I can't be bothered redrawing them by tomorrow I'll post my authentic napkin material.
My concept is that the heightmap maps a HUGE and DETAILED world - but we sample a fragment of it at a time... I promise to post more information on this; hopefully, with some good feedback, we might evolve a theoretical system that sounds solid enough to put into code. I hate pissing into the wind :)
Posted on 2004-07-13 12:07:50 by Homer
'nuff said.

Questions?
Posted on 2004-07-16 09:27:05 by Homer
sounds interesting, and the napkin is cute :)
Posted on 2004-07-16 09:47:51 by f0dder
It's sometimes good to backtrack on old ideas once you've matured with experience in different fields.
I've done some UV animation recently; I must try how it behaves on a single huge trianglefan. With a single coordinate where the player stands, it may be easier to target the texture movement onto the player correctly than with trianglestrips.

3D modelling experience: I can see an approach of making many rings in a 3D modelling program, using a function to increase/decrease the polycount to create rings adequate for the different distances, and finally exporting it all as one object to create the perfect terrain for this.

Interpolating heightmap pixels that way should be perfect for a moving ocean, which is what I want.
Posted on 2006-08-03 15:07:41 by daydreamer
There's one final improvement you could implement if you were to attempt the SparseRadial approach with regard to open ocean: since the ideal geometry for water is truly curved, you could implement the geometry using splines, for example Catmull-Rom splines.
I found them unsuitable for land masses, except for rolling green pastures... terrain on land is rarely so perfectly curved, but at sea, splines would let you achieve close to infinite levels of detail.
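
For reference, this is the standard Catmull-Rom form I have in mind - the textbook formula, nothing project-specific - applied to four successive height samples, with t running between the middle two:

/* Smooth height between samples p1 and p2, with p0 and p3 as neighbours.
   E.g. catmull_rom(h0, h1, h2, h3, 0.5f) gives the curved height halfway
   between the two middle samples. */
static float catmull_rom(float p0, float p1, float p2, float p3, float t)
{
    float t2 = t * t, t3 = t2 * t;
    return 0.5f * ((2.0f * p1) +
                   (p2 - p0) * t +
                   (2.0f * p0 - 5.0f * p1 + 4.0f * p2 - p3) * t2 +
                   (-p0 + 3.0f * p1 - 3.0f * p2 + p3) * t3);
}
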
Worth thinking about :)
Posted on 2006-08-03 21:29:41 by Homer
God damn this was a cool idea, I should revisit it in gpu code, perhaps Scali would be interested in helping to knock out a quick C version :)
Posted on 2009-12-04 23:28:09 by Homer

"God damn this was a cool idea, I should revisit it in gpu code, perhaps Scali would be interested in helping to knock out a quick C version :)"

Erm, sorry for asking, but was it a joke, or are you serious?
Posted on 2009-12-07 09:48:18 by Scali
Well, what would you suggest - a small vertex texture combined with a shader that generates points on concentric circles, and the triangles between them? How can we use the GPU to height-bump a 'dartboard' array of sample points?
Posted on 2009-12-11 07:54:00 by Homer
Not sure if I understand you properly...
Basically you build a 'dartboard' out of triangles, right? And the player/camera is always in the center... so although the triangles get larger from the center to the edge of the 'dartboard', because of the perspective mapping, they will effectively be ~the same size on screen (larger triangles being further away).

Now, the X and Y coordinates of this dartboard will be fixed... but the z-coordinate will be read from a heightmap of the world. Effectively that would mean that you sample the heightmap at high frequencies for the center of the 'dartboard' (so the player/camera position), and at increasingly lower frequencies into the distance.
So far so good?

In that case, the easiest way is probably to just store the dartboard with Z=0 as fixed geometry, and implement the heightmap lookup in a vertex shader. Mipmapping and bilinear/trilinear texture filtering will take care of the interpolation required for the height components.
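
As a CPU reference for what a single filtered texture fetch would be doing per vertex, something like this (the heightmap layout and the wrapping are just assumptions for the sketch):

#include <math.h>
#include <stdint.h>

#define MAP_SIZE 1024

static int wrap_i(int v) { return ((v % MAP_SIZE) + MAP_SIZE) % MAP_SIZE; }

/* Bilinear lookup at a fractional texel position (u,v), wrapping at the
   edges - i.e. what a filtered texture fetch would return (mip selection
   is left to the hardware). */
static float sample_height(const uint8_t map[MAP_SIZE][MAP_SIZE], float u, float v)
{
    int   x0 = wrap_i((int)floorf(u)), y0 = wrap_i((int)floorf(v));
    int   x1 = wrap_i(x0 + 1),         y1 = wrap_i(y0 + 1);
    float fx = u - floorf(u),          fy = v - floorf(v);   /* fractional remainder */

    float h00 = map[y0][x0], h10 = map[y0][x1];
    float h01 = map[y1][x0], h11 = map[y1][x1];
    float top = h00 + (h10 - h00) * fx;
    float bot = h01 + (h11 - h01) * fx;
    return top + (bot - top) * fy;
}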

Alternatively, I suppose you could also do a 'reverse projection'. E.g. the 'ideal' triangle mesh is a regular grid in screenspace. You could back-project the vertices on screen to their positions assuming that they lie on the plane z=0, then use the resulting X/Y coordinates to look up the actual Z. Of course that requires some fudging to avoid points that would be projected to 'infinity'... but it might work.
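
A rough sketch of that plane intersection (the camera model here is an assumption; rays that point away from the plane, or run parallel to it, are the 'infinity' cases that need the fudging):

typedef struct { float x, y, z; } Vec3;

/* Intersect a camera ray with the ground plane z = 0 (z is the height
   axis here, matching the heightmap convention above).  Assumes the
   camera sits above the plane, i.e. cam_pos.z > 0.
   Returns 1 and writes the hit point, or 0 for the degenerate cases. */
static int backproject_to_ground(Vec3 cam_pos, Vec3 ray_dir, Vec3 *hit)
{
    if (ray_dir.z > -1e-6f)        /* parallel to, or pointing away from, the plane */
        return 0;

    float t = -cam_pos.z / ray_dir.z;    /* solves cam_pos.z + t * ray_dir.z == 0 */
    hit->x = cam_pos.x + t * ray_dir.x;
    hit->y = cam_pos.y + t * ray_dir.y;
    hit->z = 0.0f;                       /* the actual Z then comes from the heightmap */
    return 1;
}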

I once had the idea to do a similar back-projection for 'marching cubes in screenspace'. Normally you generate the geometry in a regular 3D grid and then project it on screen... but what if your grid is not regular in world space, but regular in post-perspective space? Then you get 'free' adaptive sampling. I never implemented it, but I think it could work... it's more or less a 3D variation of this idea: the grid is fixed, and you rotate the 3D volumetric world in/out of the grid.
Posted on 2009-12-11 08:19:13 by Scali