Recently I've been using a free texture generator called 'T2'.
It's quite good, but has one MAJOR DRAWBACK.
It does not attempt to solve staircasing issues arising from the UPSCALING of a HeightMap.
The result can be seen in the attached image.
As you can see, this is totally unacceptable.
I've already changed my 'height data extraction' code to use bilinear filtered results, but it makes little difference that my GEOMETRY is smooth when the TEXTURE for that geometry is not.
Therefore I see little alternative but to revise my own Texture Generator program, implementing the best features of T2 while addressing its fatal flaw.

Is anyone interested in being involved in this mini-project?
If so, I recommend you download and install T2 and play around with it to see exactly what it is we're trying to achieve, then post your ideas and pseudocode in this thread for further discussion :)
Posted on 2006-09-08 23:13:54 by Homer
The basic idea is as follows:

We have N input heightmap images (as few as ONE).
Each heightmap has M input texture images associated with it (as few as TWO).
Each texture has a set of BLENDING ATTRIBUTES.
The attributes of each texture, along with the height data, determine the INFLUENCE of each input texel with respect to the output texel.

Let's rattle off a quick list of appropriate blending attributes...

-ELEVATION: determines the absolute minimum and maximum Heights where this Texture can exist, for example we might decide that SNOW exists from height 200 to height 255.
-SLOPE: limits this texture to appear only within the given SLOPE range, for example Grass won't grow where it's too steep.

SECONDARY ATTRIBUTES: These influence the Primary Attributes..
-ATTACK/DECAY: determines how quickly any given Primary Attribute's influence fades in and out when outside the given Range.

Finally, we need some way to determine the weighting between the input HeightMaps..
HEIGHTMAP INFLUENCE: Determines the contribution to the final output image of the per-heightmap texels we calculated.
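To make the attribute list concrete, here's a minimal sketch of how one such attribute could turn into a per-texel weight. The function name, the linear falloff shape, and the parameters are my own illustration, not settled design:

```python
def attribute_weight(value, lo, hi, falloff):
    """Weight in [0, 1]: full influence (1.0) inside [lo, hi],
    fading linearly to 0 over 'falloff' units outside the range
    (attack below the range, decay above it)."""
    if lo <= value <= hi:
        return 1.0
    dist = (lo - value) if value < lo else (value - hi)
    return max(0.0, 1.0 - dist / falloff) if falloff > 0 else 0.0

# Example: SNOW exists from height 200 to 255, fading over 20 units
# below the range. A height of 190 is 10 units below, giving 0.5.
w = attribute_weight(190, 200, 255, 20)
```

Slope would use the same shape of function with slope values instead of heights.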

How does that lot sound to you?

Posted on 2006-09-08 23:36:36 by Homer
If you blend snow/rock between 195-205, that's only 10 steps being converted into 0.1 steps, which is what causes the staircasing. Maybe you should convert the heightmap to work internally with floats, giving a smooth range instead of steps; the final output is vertices that use floats anyway.
If you read a pixel at 200 and a neighbouring pixel at 205 and then upscale, the pixel in between is going to be 202, not 202.5, if you keep working with bytes.
Should we also require a minimum of, say, 32*32 pixels to participate in the blend between two terrains, and test with different minimums until it looks good? Because if it happens that only 4 pixels sit between full snow and full rock, it won't look good unless we expand the transition in either direction for a smoother blend.
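The byte-quantisation point above can be shown in two lines (a sketch, not code from either project):

```python
# Upscaling in byte arithmetic loses the fractional midpoint:
a, b = 200, 205
mid_float = (a + b) / 2.0     # 202.5, a smooth in-between height
mid_byte = (a + b) // 2       # 202, truncated; repeated over a range
                              # this produces visible staircasing
```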
Posted on 2006-09-09 00:54:11 by daydreamer
All of the issues you just mentioned have been addressed, with the exception of DownScaling.
The most important issue you mentioned is what I call 'hard switching' between different textures, and that's handled by the implementation of 'Attack' and 'Decay' influence modifiers.

Yes, my own code *DOES* use floating coordinates, *DOES* perform weighting between the closest integer neighbours, etc.
The real problem I mentioned with the T2 generator application is that application *DOESN'T* use floating coords when it samples the HeightMap. If it did, I wouldn't be considering writing a terrain texture generator.

The algorithm you proposed is VERY SIMPLISTIC.
That's the algorithm for weighted blending of two pixels from two textures.
What if we have N input textures, and what if the weights are nonlinear?
Sure, at the end of the day we still use the algo you mentioned, but BEFORE that, we have a much more complex algo for calculating the Weights, taking into account much more than simply the Height.

We're doing a lot more than JUST blending N textures assigned to N Height ranges. We must shift our thinking so that we consider the 'INFLUENCE' of each input texel, ie its contribution to the final image based on various attributes of each input texture, and indeed the influence of the Heightmap itself, should we use more than one Heightmap (so that we can have Roads and other stuff that 'overrides' the standard blending result).
Posted on 2006-09-09 01:25:39 by Homer
homer wrote:The algorithm you proposed is VERY SIMPLISTIC.
You have to start from basic working things before going advanced.
homer wrote:That's the algorithm for weighted blending of two pixels from two textures.
What if we have N input textures, and what if the weights are nonlinear?
You mean x^2 weights? What if we add up all N input textures and divide the final result by N,
letting it add together as (1/N x texture1 x blendfactor1) + (1/N x texture2 x blendfactor2) + (1/N x texture3 x blendfactor3)
when blending highlights?

So should we also implement Catmull-Rom splines between heightmap pixels, and use the normal as one of the weights?
Weighting terrain against normals looks good: mountains with grassy plains are possible with this, and blending vertical normals against horizontal normals gives great results with a simplified n^2.
There's a document on this in the ATI SDK; it doesn't matter that it's for pixel shaders, the math/principle is the same.

Posted on 2006-09-09 05:49:35 by daydreamer
This thread is about generating textures for the terrain in software as a preprocessing step, not blending textures in hardware at runtime. This means that rendering the terrain requires no fancy tricks, no alphablending, no special hardware capabilities, just one texture stage, and is as fast as it can possibly be.. this means we can draw more vegetation, more explosions, etc.

Adding up the pixel channel components and then dividing by N results in what I'd call 'RGB Average'.
What we really want to do is multiply each channel component OF EACH PIXEL by a Weighting factor and then add the results.
If all the Weights add up to 1.0, then no division is necessary, resulting in a true Weighted Sum.
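The distinction between an 'RGB Average' and a true Weighted Sum, in a short sketch (the function name and sample values are mine):

```python
def weighted_blend(pixels, weights):
    """Blend RGB pixels with per-pixel weights. Because the weights
    sum to 1.0, no final division by N is needed."""
    assert abs(sum(weights) - 1.0) < 1e-6, "weights must sum to 1.0"
    return tuple(
        int(sum(p[c] * w for p, w in zip(pixels, weights)))
        for c in range(3)
    )

# Two input textures: 75% influence for 'grass', 25% for 'rock'.
out = weighted_blend([(40, 180, 40), (120, 110, 100)], [0.75, 0.25])
# out is (60, 162, 55): each channel is a weighted sum, not an average.
```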

Now, where do we get these Weighting values from?
It depends on what we want to achieve.
Sure, obviously, we can use the Height data to calculate the Weights, but why stop there? We can introduce more terms into the equation, taking into account Slope and other attributes.
We can set up further Weights to give one Texture a greater influence in the output than other(s).
Finally, we can have multiple HeightMaps, each with its own set of input Textures, and each HeightMap having a 'final weight' controlling the influence of one HeightMap's SET of Textures relative to the other(s).

What I'm really pointing out here is that we can avoid blending pixels and then re-blending them with secondary Weights in order to take into account more than just Height... we can write a UNIFIED EQUATION and derive from it an algorithm which allows us to produce this complex blending while only sampling each pixel ONCE.
Posted on 2006-09-11 23:21:04 by Homer
Update of my Terrain TextureGenerator project..
I've ripped off a number of concepts from the T2 TextureGenerator.
Credit where it's due :)

Note that I've hooked up all the GUI controls, but this is NOT a working beta, it's a 'gamma release'..

I'll attempt to explain the GUI controls and what's going on behind them..

We have a list of HeightMaps.
Each HeightMap has a list of input Textures (grass, snow etc).

Basically we start by adding one or more HeightMaps and then we can add TO EACH HEIGHTMAP one or more Textures..
Selecting a HeightMap or a Texture allows you to view and edit the ATTRIBUTES of the selected entity.

For each HeightMap, we can choose an INFLUENCE value.
This value is the 'master influence per HeightMap', and is the last weighting to be applied before any pixel is written to the output file(s).
The numbers you use are totally unimportant.. your values are summed to produce a 'total influence', the ACTUAL influence applied is 'your value divided by the total'.
Example: if we have two heightmaps with influence values of 1.0 and 2.0, the total is 3.0, and the ACTUAL applied influences are 1.0/3.0 and 2.0/3.0 respectively.
Other influence values are treated similarly.
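The normalisation described above (your raw value divided by the total) is just:

```python
def normalise_influences(raw):
    """Convert arbitrary per-HeightMap influence values into
    ACTUAL applied weights that sum to 1.0."""
    total = sum(raw)
    return [v / total for v in raw]

# Two heightmaps with influence values 1.0 and 2.0:
# the applied influences are 1/3 and 2/3, exactly as in the example.
weights = normalise_influences([1.0, 2.0])
```

This is why the numbers the user types are "totally unimportant"; only their ratios matter.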

As I mentioned, each HeightMap owns a number of Textures.
These textures will be blended together (with respect to the owner HeightMap) to produce output pixels.
Each of these Textures has some variables we can use to fine-tune the output pixels:
Elevation Range (Min/Max): combined with Height values (sampled from the owner HeightMap), this determines which textures will play a part in the output.. it's how we can make snow appear on the top of mountains.
Elevation Influence: This is used to weight the overall influence of Elevation, ie give greater or lesser priority to our blending textures in regards to Elevation only.

Slope Range (Min/Max): combined with Slope values (again obtained from the Heightmap), this can be used to control where our textures will appear, and where they won't, irrespective of the Elevation.. for example grass won't grow in places where it is too steep, even if the Elevation is suitable for grass.
Slope Influence: This is used to weight the overall influence of Slope, ie give greater or lesser priority to our blending textures in regards to Slope only.

The pseudocode for the loops looks something like this:

(Drive the outermost loops)
For PatchesZ = 0 to NumPatchesZ
--For PatchesX = 0 to NumPatchesX
----Generate texture for current Patch

(Drive the inner loops)
For OutputPixelY = 0 to Height-1
--For OutputPixelX = 0 to Width-1
----For each HeightMap
------Sample Height
------For each Texture of current HeightMap
--------Sample Pixel
--------Split Pixel into components
--------Apply  weights
--------Sum components
------Apply master influence
------Recombine components to calculate output pixel
------Write pixel to output file
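The inner loops above might look like this in Python. This is a structural sketch only: the stand-in classes, constant-colour textures, and names are mine, not the ObjAsm32 implementation:

```python
class Texture:
    def __init__(self, color, weight_fn):
        self.color = color          # constant RGB stands in for a real bitmap
        self.weight_fn = weight_fn  # height -> weight in [0, 1]

    def sample(self, u, v):
        # A real implementation would do a bilinear fetch here.
        return self.color

class HeightMap:
    def __init__(self, textures, master_influence, height_fn):
        self.textures = textures
        self.master_influence = master_influence
        self.height_fn = height_fn  # (u, v) -> height

def generate_patch(heightmaps, width, height):
    """Weight-blend every texture of every heightmap, per output pixel."""
    out = {}
    for y in range(height):
        for x in range(width):
            u, v = x / width, y / height
            acc = [0.0, 0.0, 0.0]                 # RGB accumulators
            for hm in heightmaps:
                h = hm.height_fn(u, v)            # Sample Height
                for tex in hm.textures:
                    r, g, b = tex.sample(u, v)    # Sample Pixel, split it
                    w = tex.weight_fn(h) * hm.master_influence
                    acc[0] += r * w               # Apply weights, sum
                    acc[1] += g * w
                    acc[2] += b * w
            out[(x, y)] = tuple(int(c) for c in acc)  # Recombine, write
    return out
```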

I would very much appreciate hearing your ideas, criticisms, etc.

Posted on 2006-09-13 04:45:34 by Homer
I've got the application generating textures now, but there are a few bugs, which is to be expected..
I've attached four images from my earliest testing.
This isn't a great-looking example, I've chosen textures to help highlight problems in the blending implementation, not to make it look pretty..
#1 is the HeightMap I used; it's a topography of the planet Mars.
#2 is Texture A (Green)
#3 is Texture B (Red)
#4 is the generated output
Note the bright green artefacts which are false 'high spots'.. where'd they come from?
I haven't incorporated Slope yet either.

Texture A was assigned to elevations from 150 to 255 ('high areas are green')
Texture B was assigned to elevations from 0 to 128 ('low areas are red'), and was assigned an Elevation Influence of 1.5 ('more red')
Note that there's a gap where NO textures are being applied ;)
The unassigned height range remains black.

Posted on 2006-09-13 21:38:16 by Homer
I fixed that artefact :)
Remember when you look at this texture that the input textures would normally be grass, rock, etc.
The attached image (just red, green and heightmap) shows graphically the Blending Weights that would be applied to the input textures.

The implementation still isn't perfect, but it's getting better.

Posted on 2006-09-14 02:03:57 by Homer
I'd like to point out one significant difference between my generator and T2 (aside from mine using bilinear filtering at all times), which is that mine generates output textures on a per-output-texture basis, rather than generating a single HUGE texture and then chopping it up as an afterthought.
That is to say, all the input textures are sampled with respect to the current output texture, eliminating duplication of pixel fetches.

Updated the project some more.
A critical bug was detected and fixed in Biterider's Pixelmap.GetVirtualPixel method !!!
The 'Save Project' button now works, but I haven't written the corresponding 'Load Project' code.

The GUI has some new per-texture blending controls, namely a ComboBox for selecting a 'falloff function', and an editbox for selecting the 'falloff range'.. these values are used to determine terrain influence when outside the 'Active Elevation Range'.

The algorithm used to calculate the influence outside AER was modified, I dumped my algo in favour of Yurdan Gyurchev's version.
Mine faded the Influence over the entire height spectrum, whereas his allows the user to specify the acceptable Range (outside the AER), so that it's possible for the user to control how 'sharp' the transition is.. I may further modify this algo by allowing the user to specify separate 'attack' and 'decay' rates, which determine the Influence below and above the AER respectively.

I still haven't implemented Slope weighting, but you'd have to admit that this project is looking more and more like a useful tool :)
I'll worry about the butt-ugly GUI cosmetics when everything is working to my satisfaction.

Any feature requests or bugreports would be appreciated :)
Posted on 2006-09-15 01:06:08 by Homer
This update contains a few bugfixes and improvements.

LoadProject and SaveProject are working :)
Linear, Sine and Cosine Falloff are working, but I'm not sure I did it right.
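For reference, here's one plausible way to define those three falloffs over a normalised distance t in [0, 1] (my guess at the intent, not necessarily what the project actually computes):

```python
import math

def falloff_linear(t):
    """Straight ramp from full influence (t=0) to none (t=1)."""
    return 1.0 - t

def falloff_sine(t):
    """Eases out gently near the edge of the active range."""
    return math.sin((1.0 - t) * math.pi / 2)

def falloff_cosine(t):
    """Smooth S-curve: flat at both ends, steep in the middle."""
    return (1.0 + math.cos(t * math.pi)) / 2
```

All three agree at the endpoints (1.0 at t=0, 0.0 at t=1), so swapping between them only changes the shape of the transition, not its extent.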
Posted on 2006-09-16 11:10:27 by Homer
Another update.. more bugfixes.. Resource images and the most recently generated output image are drawn on the app window.
LoadProject now works perfectly.

TexGen is very cpu-intensive, so I might shoe-horn that code into a Thread, just so the GUI doesn't freeze up under load.
The pro is that I'd be able to show progress on the GUI, the con is that the time it takes to generate would be longer.
Posted on 2006-09-17 01:58:52 by Homer
The zip in the previous post was replaced with a further update, I've added some 'mouseover' controls which allow you to inspect the current HeightMap by simply moving the mouse over it.
Pixels will be sampled under the mouse cursor and GUI controls updated accordingly.. it's really just to give you a better idea of the range of Heights so you can set your per-texture Elevation attributes sanely.
As an afterthought, some gui controls were tweaked for position.

Some of the ObjAsm32 core files were updated, let me know if you have problems building this project and I'll supply the updated files.
Posted on 2006-09-17 04:04:49 by Homer
Even though I still haven't deeply tested the current project and KNOW it has some minor problems, I'm going to start looking at implementing the SLOPE attributes.
There's a bit of a problem with SLOPE though, which I'll now discuss.

We have two ways that we can calculate the Slope at any given point on the HeightMap.
#1 is to do it 'correctly', and #2 is to do it 'quickly'.

Method A) Extract the geometry for the terrainpatch in question (ie evolve triangles from the height data), calculate the SURFACE normal of each Triangle, and then for each Vertex, calculate the AVERAGE of the Normals of the Surfaces it shares. Ouch!

Method B) For a given point on the Heightmap, obtain the Height there, and also the Height of the Neighbour points in +X and +Z, and from those values and the world scaling value, we can calculate a 'fairly accurate' vertex normal by 'scaling the Y differential to the X and Z axial stepping distances'.
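Method B in sketch form: a slope angle from the +X and +Z height differentials. The scale handling and names here are my assumptions about the description above, not the project's code:

```python
import math

def slope_degrees(heights, x, z, grid_spacing, height_scale):
    """Approximate the slope at (x, z) from the +X and +Z neighbours:
    scale the Y differential to the axial stepping distance, then
    take the angle of the resulting normal from the vertical."""
    h = heights[z][x]
    dx = (heights[z][x + 1] - h) * height_scale / grid_spacing
    dz = (heights[z + 1][x] - h) * height_scale / grid_spacing
    # The unnormalised 'fairly accurate' normal is (-dx, 1, -dz);
    # the angle between it and straight up is the slope.
    length = math.sqrt(dx * dx + 1.0 + dz * dz)
    return math.degrees(math.acos(1.0 / length))

flat = [[5, 5], [5, 5]]   # no height change -> slope 0 degrees
ramp = [[0, 1], [0, 1]]   # unit rise over unit run -> 45 degrees
```

This needs only three height fetches per point, which is where the 'pixel oversampling' cost comes from when neighbouring points re-fetch the same samples.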

Method A is QUITE slow as it is based on many crossproducts.
Method B can be ALMOST as slow due to the high degree of 'pixel oversampling'.

It seems that either method can be optimized somewhat by prefetching all the Height data for the current terrainpatch, which implies that I need to port my code for doing exactly that from my Terrain visualizer demo back into this project.
As such, it seems logical that this project should be capable of producing not just the Textures, but also the TER(rain) geometry files, which I've implied in the text that sits on the demo's titlebar.
Furthermore, it seems logical to import the whole TerrainPatch object, bearing in mind that TexGen produces textures on a per TerrainPatch basis ;)

Adding TerrainPatch support to the project implies adding D3D9 support, which in turn implies the eventual importation of further chunks of the Terrain visualizer project, so that we can see the textures and terrainmeshes we are generating without leaving the TexGen app (which is a feature of the T2 texgen upon which this project is strongly based).

I'm feeling a little starved for feedback guys, anyone care to share?
All opinions, related links, bugreports etc are welcome!

Posted on 2006-09-18 01:05:20 by Homer

Homer wrote:
There's a bit of a problem with SLOPE though, which I'll now discuss.

Method A) Extract the geometry for the terrainpatch in question (ie evolve triangles from the height data), calculate the SURFACE normal of each Triangle, and then for each Vertex, calculate the AVERAGE of the Normals of the Surfaces it shares. Ouch!

Method A is QUITE slow as it is based on many crossproducts.

It's pointless to give feedback if you make a BIG distinction between prerendering and rendering when both are based on the same math/algorithm. For prerendering, the target hardware doesn't matter, even if it's a slow SNES it's being prerendered for.
You've now arrived at exactly the problem the shader pdf addresses: it shows simplified math for blending based on normals, sped up while still producing good enough visible results.
Also, I've been busy with my own project.

I'd say something between A and B, leaning toward A, but without overdoing computations that are unnecessary because they don't produce any visible improvement.
On the other hand, if you're going to make a serious prerendering blender, you may as well go for A and leave it running overnight for the highest-quality textures.
Posted on 2006-09-23 06:40:02 by daydreamer
daydreamer : Thanks for the feedback, I appreciate it.
You talk about using Normals in the blending process, I talk about Slope, and it's the same damn thing: Slope is a derivative of the 'change in height over distance', which is directly related to the Y component of Normals, just expressed differently (and in my case obtained from a rise-over-run calculation, because normals are not user-friendly; most people think in terms of angles, not normals).

Major update to this project.
Changes include:
-many bugfixes
-implemented code for Slope, with MouseOver realtime slope also
-now supporting launching of app via 'filetype association' - so you can associate PRJ files with the app and use them to launch the app.
-fixed a 'rounding bug' which only presented itself with certain combinations of input values
-implemented 'progress status' control
-moved TexGen into its own Thread so the GUI does not hang - this slows things down somewhat but it's much more 'professional'.
-calculation of surface planes, surface normals and vertex normals
-changed TER file format to contain Geometry rather than Heightmap

VertexNormal calculation is VERY slow, because I implemented it using a bruteforce 'exhaustive search' method to find vertices which are shared across terrainpatch boundaries, ie belong to more than one 'mesh'. The alternative is to generate one single massive world geometry and one massive texture and then chop everything up at the last moment, as in the T2 generator. I wanted to do things on a per-patch basis, which is more resource-friendly, and besides, I'm not interested in how long it takes, since this is all preprocessing. What matters is that the world can be HUGE and detailed; that's really what's most important to me. T2 can't create huge worlds without huge system resources, and I can. The whole point of the exercise was to address shortcomings in T2, and I believe I'm succeeding, slowly but surely..

There's been so many changes I can't remember them all, but I did note most of the changes in the source comments, if you care..

I'm still not totally happy with the blending algo as it stands, and only Linear falloff is really working at the moment, but hey, it's looking much better.. also, Biterider has made my new 'DrawDIBonDC' method redundant, so that'll be changed next update.

Posted on 2006-09-27 03:20:13 by Homer
Time for me to talk a little more about my implementation..
I'm going to describe how the generation loop works and how the Blending algo operates, and then propose an idea. I'd sure love to hear YOUR ideas too. For the purposes of clarity, I'm only going to talk about a single Heightmap which owns N textures, so please imagine that's what we've selected via the application gui..

First a little technical stuff to get you in the mood:
We'll step across the output image (output width and height).
For each output pixel, we'll loop through the input textures, grabbing the pixel at the corresponding coordinate. That's done by transforming the output XY pixel coordinate (with respect to the output dimensions) into a 'UV coordinate' (with respect to ONE), and then, for each input texture, transforming that UV coordinate back into XY (with respect to that input image's dimensions).
Note that the XY coords are floating point: we use a special version of GetPixel which returns the blended result of the nearest four pixels (that's called Bilinear Filtering). This means we're grabbing virtual pixels in between the actual pixels, so the input images can all be different dimensions and it's not a problem for us.
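A sketch of that bilinear 'virtual pixel' fetch (a generic implementation for illustration, not Biterider's Pixelmap.GetVirtualPixel):

```python
def get_virtual_pixel(image, width, height, fx, fy):
    """Bilinear sample at floating-point (fx, fy).
    'image' is a row-major list of (r, g, b) tuples."""
    x0, y0 = int(fx), int(fy)
    x1, y1 = min(x0 + 1, width - 1), min(y0 + 1, height - 1)
    tx, ty = fx - x0, fy - y0          # fractional parts

    def px(x, y):
        return image[y * width + x]

    out = []
    for c in range(3):
        # Blend horizontally along the top and bottom rows,
        # then blend those two results vertically.
        top = px(x0, y0)[c] * (1 - tx) + px(x1, y0)[c] * tx
        bot = px(x0, y1)[c] * (1 - tx) + px(x1, y1)[c] * tx
        out.append(top * (1 - ty) + bot * ty)
    return tuple(out)

# Sampling midway between a black and a white pixel gives mid-grey.
img = [(0, 0, 0), (255, 255, 255)]            # a 2x1 image
grey = get_virtual_pixel(img, 2, 1, 0.5, 0.0)  # (127.5, 127.5, 127.5)
```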

Anyhow, we're grabbing all the input pixels at the same relative coordinate as each output pixel, and we're blending them, which brings us to the Blending algo..
As soon as we grab an input pixel, we 'decompose' (break) it into RGB component values (from 0 to 255), we apply some Weights to make each value smaller, and we sum the results in separate RGB 'accumulators' by simply adding together the 'partial rgb results' obtained from our input pixels.
After we've done this for all input textures, the RGB accumulators contain our current output pixel. We're weight-blending, so there's no need to average the RGB accumulators; in fact we don't want to average anything, we want weighted results, and we have them.
The magic is simple math: as long as the total of the input weights equals ONE, the results will be sane. The trick then is to calculate our input Weights carefully so that they add to 1.0, ahuh?

Let's talk about Weights..
We use another word for Weights, we can also call them 'Influences', so 'the sum of the influences' means the same thing as 'the sum of the weights'.
In terms of math and programming, there's a third name we can use to describe them.. we can call them Fractions.
Imagine that we had 6 input textures, each with an Elevation Influence of 1.0 (ie a total elevation influence of 6.0).
For sane blending, and noting that we haven't discussed all the factors yet, it's pretty obvious that we need to apply a Weight of 1/6 (= 0.1666 recurring) to our input RGB values so that the sum of the Weights is 1, ahuh ahuh.
Now we grab a Height value by sampling the heightmap at the current relative pixel coordinate, and (for each texture) we use it and the remaining Elevation attributes to obtain a value between 0.0 and 1.0, which we apply to our 1/6, so like before, we're down-scaling.
If the Height value is within the acceptable range, our downscaling factor will be 1.0, ie no downscaling.. but if it's not within that range (and assuming we want to use Falloff), the value will be smaller than one, and possibly zero. So most of the time, our 1/6 will be made even smaller, the RGB contribution to the output pixel will shrink, and we 'fade' the contribution of each input texture based on its own unique attributes.
Slope pretty much works the same way as Elevation, just using different input values to give us more control over the contribution of each input texture to the output.
I treat Slope as just another influence on the output, and I combine Slope influence at the same time as Elevation .. a 'unified equation'.
If the Slope influence is zero, the RGB contribution will be zero (for that texture) REGARDLESS of the Elevation, and vice versa.
This means that we're not 'playing the influences against each other', and still able to obtain sane results.
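That 'unified equation', where the elevation and slope influences multiply so that either one can zero a texture out entirely, might look like this (the names and the renormalisation step are my interpretation):

```python
def unified_weights(elev_w, slope_w):
    """Combine per-texture elevation and slope weights multiplicatively,
    then renormalise so the surviving weights sum to 1.0."""
    combined = [e * s for e, s in zip(elev_w, slope_w)]
    total = sum(combined)
    return [c / total for c in combined] if total > 0 else combined

# Texture 0 passes both tests; texture 1 fails the slope test entirely,
# so it contributes nothing regardless of its elevation weight.
w = unified_weights([1.0, 0.8], [0.5, 0.0])   # -> [1.0, 0.0]
```

The multiplication is what prevents the influences from 'playing against each other': a zero in either factor kills the texture's contribution outright.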

I'm thinking about adding checkboxes to allow the user to disable Slope and Elevation attributes, since we can't just set Slope influence to zero when we only care about Elevation etc (the unified result would be zero, which is not what we wanted).

The intention of my approach is that nothing we are doing can make pixels any brighter (than dictated by their weighted inputs).
Everything we are doing can only make them the same or LESS bright: in my mind, any anomaly should appear as very bright or very dark and be fairly easy to spot, given various test inputs.
That's certainly proven to be the case so far :P

I'll see if I can't get the remaining Falloff functions fixed tonight, so the next update is likely to be the 'official Beta release', and I'll be looking for a few brave volunteers to beta-test the app :)

Posted on 2006-09-27 07:35:34 by Homer