I want to pick your collective brains.
Look at the attached images, and try to develop a scheme which follows these rules:
-Only triangles (or quads, which are two triangles)
-More triangles near the eye, fewer in the 'distance'
-No 'T-junctions' (triangles all meet neatly, edge to edge)

Provided that these rules are observed, what tessellations could we use?
Posted on 2008-09-09 09:01:43 by Homer
You just measure the screen-space area of each quad/triangle and, above a certain threshold, split it into 4 triangles (no matter whether it's a quad or a tri).
There's a Geometry Shader program for this already ;)
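A minimal CPU-side sketch of that idea (illustrative Python, not the geometry shader program mentioned above; the threshold value and the recursive splitting are my own assumptions):

```python
def tri_area(a, b, c):
    # screen-space triangle area via the 2D cross product
    return abs((b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1])) / 2.0

def midpoint(p, q):
    return ((p[0] + q[0]) / 2.0, (p[1] + q[1]) / 2.0)

def subdivide(tri, threshold):
    """Split a screen-space triangle into 4 at its edge midpoints
    while its area exceeds `threshold` (in pixels^2)."""
    a, b, c = tri
    if tri_area(a, b, c) <= threshold:
        return [tri]
    ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
    out = []
    for t in ((a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)):
        out.extend(subdivide(t, threshold))
    return out

# a 2048 px^2 triangle splits twice: 4 tris of 512, then 16 of 128
tris = subdivide(((0.0, 0.0), (64.0, 0.0), (0.0, 64.0)), threshold=256.0)
```

Note that applying the threshold per triangle like this can leave neighbouring triangles at different subdivision levels, i.e. exactly the T-junctions Homer's third rule forbids, so a real implementation would also need some edge stitching.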
Posted on 2008-09-10 00:22:16 by Ultrano
I've been thinking about applying the view transform to deform a predefined tessellation template which is defined in unit space (equal to texture space). I've already used this technique to extract the planes and bounding vertices of the view frustum in 3D in realtime applications; here I'm just applying it to a more complex unit-space geometry.
This can be done on the CPU or the GPU.
But I'm not describing the 'geometry clipmaps' technique (which was published by Hugues Hoppe exactly one month after I posted on this forum about Sparse Radial Oversampling).
This is a lot faster, it eliminates some problems with that scheme, and it's a new idea based on an old one.
In the SRO algorithm, you're riding a dartboard-like 'pizza' of sampling coordinates, so if the camera rotates but doesn't translate, we don't need to retessellate the pizza, and our framerate can be higher.
I realized that I don't need to keep all the triangles to perform simple collision detection against the terrain.
I can do it atomically, no problem.
I realized that, in terms of rendering, I don't need the whole pizza - just the slice that represents the current view - and that my collision detection against the terrain FOR OFFSCREEN SPACE can be based on something other than the planes of triangles, at relatively low cost.
Posted on 2008-09-10 05:57:28 by Homer
OK before I describe what I have in mind, I'll explain a little about 'unit space projections'.

Let's describe a 2D Square in "unit space".
X values will range between +/- 0.5, and Y (or Z) values will range between 0 and 1.

Near Left = (-0.5, 0.0)
Near Right = (+0.5, 0.0)
Far Left = (-0.5, +1.0)
Far Right = (+0.5, +1.0)

Now let's grab the current View and Projection transform matrices being used by the camera, concatenate those two matrices, and then invert the resulting matrix.
The final resulting matrix (inverse viewproj) can be used to transform coordinates from "unit space" to "world space".

So if we transform the above coords using said matrix, we get:

Near Left = (-smallvalue,CameraFrustumNearPlaneDistance)
Near Right = (+smallvalue,CameraFrustumNearPlaneDistance)
Far Left = (-largevalue, CameraFrustumFarPlaneDistance)
Far Right = (+largevalue, CameraFrustumFarPlaneDistance)

Note that this calculation didn't take world translation into account; it assumes the camera is always located at World Zero, which is important for the algorithm I'm going to describe next.

So, we just learned how to 'deform' a unit square into a frustum-shaped trapezoid.
What if we supplied other unit-space geometries, for example a unit triangle, or a unit circle or sphere or whatever?
We can deform ANY geometry defined in unit space, effectively stretching it, giving it 'perspective'.
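A small numeric sketch of this idea (Python, with my own matrices; I'm using OpenGL conventions, where 'unit space' is the NDC cube with x, y, z in [-1, +1] and the camera looks down -z - Homer's ±0.5/[0..1] square differs from that only by a scale and bias). With the camera at World Zero the view matrix is identity, so inverse(view * proj) is just inverse(proj):

```python
import math

def perspective(fov_y_deg, aspect, near, far):
    """OpenGL-style perspective projection matrix (rows of a 4x4,
    multiplying column vectors)."""
    f = 1.0 / math.tan(math.radians(fov_y_deg) / 2.0)
    nf = 1.0 / (near - far)
    return [[f / aspect, 0.0, 0.0, 0.0],
            [0.0, f, 0.0, 0.0],
            [0.0, 0.0, (far + near) * nf, 2.0 * far * near * nf],
            [0.0, 0.0, -1.0, 0.0]]

def invert4(m):
    """Invert a 4x4 matrix by Gauss-Jordan elimination with partial pivoting."""
    a = [row[:] + [1.0 if i == j else 0.0 for j in range(4)]
         for i, row in enumerate(m)]
    for col in range(4):
        piv = max(range(col, 4), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        d = a[col][col]
        a[col] = [x / d for x in a[col]]
        for r in range(4):
            if r != col and a[r][col] != 0.0:
                fac = a[r][col]
                a[r] = [x - fac * y for x, y in zip(a[r], a[col])]
    return [row[4:] for row in a]

def transform(m, v):
    """Transform a homogeneous point and perform the perspective divide."""
    x, y, z, w = (sum(m[i][j] * v[j] for j in range(4)) for i in range(4))
    return (x / w, y / w, z / w)

# fov/near/far are arbitrary example values; view = identity, camera at origin
inv = invert4(perspective(60.0, 1.0, 1.0, 100.0))

# NDC corners: z = -1 is the near plane, z = +1 the far plane (GL convention)
near_right = transform(inv, (1.0, 0.0, -1.0, 1.0))  # (+smallvalue, 0, -near)
far_right  = transform(inv, (1.0, 0.0, +1.0, 1.0))  # (+largevalue, 0, -far)
```

The right-hand edge of unit space lands at x = near·tan(fov/2) on the near plane and x = far·tan(fov/2) on the far plane (z comes out negative because GL looks down -z): the unit square has been stretched into the frustum trapezoid, as described above.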

Now I'm going to make a statement which is not immediately obvious.
The view frustum can be described as a triangle or tetrahedron rather than as a deformed square or cube.

Now picture a dart board geometry, with the viewer always located in the middle of the dartboard... and imagine that this dartboard deforms to the geometry of the terrain as the viewer moves and rotates.
It's a fact that the viewer can only see one slice of this pizza at a time - represented by a triangle whose apex is located at the origin. We can now think of our view frustum as a triangle that is moving and rotating above the map of the world.

Contrast this with the 'spherical clipmaps' algorithm and it will start to make sense.
In that white paper, the size of the triangles is based on their distance from the camera: triangles which are twice as far away are twice the size, so that they appear roughly the same size onscreen regardless of distance.
Strictly speaking, this approximation of perspective is not accurate - the Golden Mean is not a 2:1 relationship.
My concept is to deform a regular tessellation (laid out as a triangle, not as a square grid) using the same perspective projection that the camera uses, so that my triangles are EXACTLY the same size onscreen - the deformed array of points appears to be a regularly-spaced array when viewed through the camera.
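A hypothetical sketch of how such a slice could be generated (Python; the geometric ring spacing, where world size grows linearly with distance, is my own simplification - spherical clipmaps correspond to a ring ratio of 2, and a smaller ratio approximates 'equal size onscreen' more finely):

```python
import math

def pizza_slice(fov_deg, near, far, ring_ratio, slices_per_fov):
    """Vertices for one view wedge of the 'dartboard': concentric rings
    at geometrically spaced radii, covering only the visible angle."""
    half = math.radians(fov_deg) / 2.0
    angles = [-half + i * (2.0 * half) / slices_per_fov
              for i in range(slices_per_fov + 1)]
    rings, r = [], near
    while r < far:
        rings.append(r)
        r *= ring_ratio
    rings.append(far)          # close the slice exactly at the far distance
    # apex at the origin = the viewer; +y is the view direction here
    return [(r * math.sin(a), r * math.cos(a)) for r in rings for a in angles]

verts = pizza_slice(90.0, 1.0, 100.0, 2.0, 4)
```

Generating only this wedge instead of the whole dartboard is where the claimed speedup over full spherical clipmaps comes from; the wedge then moves and rotates with the camera rather than being retessellated.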

Although I plan on using spherical mapping to define planets of terrain rather than less-cool planar heightmapping, the scheme can easily handle both.

Because we're only interested in a slice of the 'pizza' that defines the hemisphere of the planet we're looking at, it's roughly 6 times faster than the 'spherical clipmaps' algorithm, without jumping through any hoops - and possibly without needing vertex textures to pass the data to the GPU (meaning it can work on older videocards).
For now I'm going to call this algorithm 'dynamic sparse superfrustum oversampling on the gpu' or simply 'the pizza slice algorithm'.

I've left out loads of details and I expect I'll modify this post, but that's a quick overview of the new algorithm in contrast with the planar and spherical GPU clipmap techniques.
Posted on 2008-09-11 03:11:07 by Homer
Ultrano - two problems with your suggestion.

#1 - Geometry Shaders only work on DirectX 10, which I believe is only available on Vista (which I don't like - I run XP SP3), although I understand there is an XP version of DX10 in the beta phase.

#2 - Geometry Shaders are only supported in hardware on the GeForce 8800 and above (my best machine has an 8800 and it's a rocket, but what about everyone else?)

I don't think that a software rendering device is a viable solution - if I have to do stuff in software, I'll do it myself, not leave it to Microsoft drivers to do it the long way :D

Posted on 2008-09-12 22:12:38 by Homer
#1 - OpenGL ;) . ATi will soon add support, too.
#2 - I doubt users with something less than GF8400 or HD2400 (both being cheapo cards) will mind not seeing things tessellated :D
Posted on 2008-09-13 02:55:13 by Ultrano

I don't think that a software rendering device is a viable solution - if I have to do stuff in software, I'll do it myself, not leave it to Microsoft drivers to do it the long way :D

Why not simply a virtual polyfiller? Meaning it doesn't actually render the triangles, but counts how many pixels big they are onscreen, and feeds that back to the tessellation proc so it tessellates more if polys appear too big onscreen.
Posted on 2008-09-15 01:47:31 by daydreamer
although I understand there is an XP version of DX10 in the beta phase.
Where?

There was the unofficial and hacky Alky project but that was shut down... I wouldn't mind DX10 on XP though. But I'd much rather see prioritized I/O and the more aggressive prefetcher, tbh :)
Posted on 2008-09-15 19:08:12 by f0dder

although I understand there is an XP version of DX10 in the beta phase.
Where?

There was the unofficial and hacky Alky project but that was shut down... I wouldn't mind DX10 on XP though. But I'd much rather see prioritized I/O and the more aggressive prefetcher, tbh :)

Well, I dropped my XP game computer on the floor, and all I could find as replacement gfx cards were DX10 cards :( - so I moved gameplay to my newest machine because I couldn't get DX9 to work anymore :( and the old one was reduced to a DVD computer / render node.
It would seriously be interesting to run a beta DX10-for-XP on that one.
Posted on 2008-09-19 12:58:35 by daydreamer
daydreamer - point me at an example of a 'virtual polyfiller', please.
Posted on 2008-09-24 07:23:41 by Homer
Maybe
http://developer.download.nvidia.com/SDK/10.5/opengl/samples.html
the "Transform Feedback Fractal" ?

Or something along that idea, but entirely on the CPU, generating the object into a streaming VBO.
Posted on 2008-09-24 11:13:55 by Ultrano
Well, if I were to use OpenGL I'd have access to geometry shaders on XP, so it wouldn't make sense - I'm looking for a path that DOESN'T use the CPU to tessellate, as it's quite expensive to do view-dependent split and merge operations at runtime on the CPU for arbitrary surfaces.
I was looking for a DirectX example.
Maybe I should think about supporting both in my graphics engine; I've written demos previously which used both, so I could extend that thought process to the engine.
Posted on 2008-09-25 01:32:58 by Homer

daydreamer - point me at an example of a 'virtual polyfiller' please.

You know I like to make DDraw thingies just for fun sometimes.
It's just a simple transform to screen pixels, like you would do if you fed it to your own software polyfiller - but instead of writing anything onscreen, it reports the edge lengths of polys in pixels and the areas of polys measured in pixels onscreen.
You could call it only for the polys closest to the viewer, and get back whatever information you need - max lengths/areas, median lengths/areas, min lengths/areas - as often as you want to adjust the tessellation, which maybe isn't necessary every frame.
After that you can add your own conditional code to change the tessellation if max and median get too big for your taste, tuned by test runs and the measured FPS of the system; an initial performance estimate would easily be possible with DX caps calls, to initialize the variables, and then let it tune up/down depending on FPS.
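For illustration, a toy version of such a virtual polyfiller in Python (my own simplified projection, standing in for whatever transform the engine already uses; nothing here is DirectX-specific):

```python
import math

def project(p, view_w, view_h, fov_deg):
    """Perspective-project a 3D point (camera at origin, looking down +z)
    to screen-pixel coordinates."""
    f = (view_h / 2.0) / math.tan(math.radians(fov_deg) / 2.0)
    return (view_w / 2.0 + f * p[0] / p[2],
            view_h / 2.0 + f * p[1] / p[2])

def measure(tris, view_w, view_h, fov_deg):
    """'Virtual polyfiller': transform triangles to screen pixels and
    report size statistics WITHOUT writing anything onscreen."""
    stats = []
    for tri in tris:
        a, b, c = (project(p, view_w, view_h, fov_deg) for p in tri)
        area = abs((b[0] - a[0]) * (c[1] - a[1])
                   - (c[0] - a[0]) * (b[1] - a[1])) / 2.0
        longest = max(math.dist(a, b), math.dist(b, c), math.dist(c, a))
        stats.append((area, longest))
    areas = sorted(s[0] for s in stats)
    return {"max_area": areas[-1],
            "median_area": areas[len(areas) // 2],
            "min_area": areas[0],
            "max_edge": max(s[1] for s in stats)}

# one 1x1 world-unit triangle, 10 units in front of a 640x480, 90-degree camera
report = measure([((0, 0, 10), (1, 0, 10), (0, 1, 10))], 640, 480, 90.0)
```

The caller would then tessellate more wherever the max/median areas exceed some pixel budget - and, as suggested above, this feedback doesn't have to run every frame.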
Posted on 2008-10-01 14:22:39 by daydreamer