Hey, feel free to express your opinions / questions etc anytime :)

This thread will log changes I make to GameDev-related OA32 objects, and it will detail my progress as I work toward building a 3D game engine, interspersed with random gamedev-related musings, and some occasional code.

If this thread receives enough attention, I will Pin it to the top of the list (like I did for the Physics thread).
However, I will be surprised if that is the case, because I'm not catering to any particular audience here.

A while ago I wrote the beginnings of a 2D/3D Audio Engine, and subsequently let it gather dust.
It's a shame, because it supports playback of static WAV, streamed WAV and streamed MP3 through either a 2D or a 3D Listener, as well as supporting '3D Audio Environments'.

Today I dusted off my 2D/3D Audio Support code, gave it a polish, and built OPTIONAL support into the D3D_Camera class... if you don't include my Audio Support *before* D3D_Camera, you get the regular D3D_Camera... but if you DO include Audio Support first, you get a D3D_Camera containing a new Init_Audio method that creates a '3D Listener', which is automatically updated whenever the Camera View changes. Essentially, we're giving the camera 'ears', so that our speakers will play whatever 3D sounds the Camera hears.
Unfortunately, there is a default D3D_Camera object embedded deeply within OA32's D3D App framework, so the Audio support code might need to become non-optional, with a little care taken not to break compatibility...
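For the curious, the listener update boils down to something like this in plain C++ / DirectSound terms (a rough sketch; UpdateListenerFromView is my name for it, not the actual OA32 method):

    #include <dsound.h>
    #include <d3dx9.h>

    // Mirror the camera's position/orientation onto the 3D listener whenever
    // the view changes. The inverse of the view matrix is the camera's world
    // transform, so its rows give us position, look and up vectors directly.
    void UpdateListenerFromView(IDirectSound3DListener8* listener,
                                const D3DXMATRIX& view)
    {
        D3DXMATRIX cam;
        D3DXMatrixInverse(&cam, NULL, &view);

        D3DXVECTOR3 pos  (cam._41, cam._42, cam._43);  // camera position
        D3DXVECTOR3 front(cam._31, cam._32, cam._33);  // look direction
        D3DXVECTOR3 up   (cam._21, cam._22, cam._23);  // up vector

        // Defer the updates, then commit them in one batch so the 3D audio
        // isn't remixed once per call.
        listener->SetPosition(pos.x, pos.y, pos.z, DS3D_DEFERRED);
        listener->SetOrientation(front.x, front.y, front.z,
                                 up.x, up.y, up.z, DS3D_DEFERRED);
        listener->CommitDeferredSettings();
    }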

Audio support was also extended to the D3D_SkinMesh class, which is now able to set up callback events within animation tracks, so we can do things like play footstep sounds that are synchronized to the animation, or make a swooshing noise when a model swings a weapon.
In order to insert callback event keys into an animation track (as opposed to into a channel of the animation controller / mixer), we need to modify the animation track... I take this opportunity to Compress the animation track, and to convert it to use Quaternion Keys.
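In C++ / D3DX terms, the sequence looks roughly like this (a sketch only; the key timings and footstep data pointers are hypothetical placeholders):

    #include <d3dx9.h>

    // Rebuild a keyframed animation set as a compressed set carrying callback
    // keys. Callback keys can only be supplied at creation time, which is why
    // compression (and key conversion) happens in the same step.
    HRESULT AddFootstepKeys(ID3DXAnimationController* ctrl,
                            ID3DXKeyframedAnimationSet* src,
                            LPVOID leftFoot, LPVOID rightFoot)
    {
        DOUBLE tps = src->GetSourceTicksPerSecond();
        DOUBLE len = src->GetPeriod() * tps;            // clip length, in ticks

        D3DXKEY_CALLBACK keys[2];                       // hypothetical timings
        keys[0].Time = (FLOAT)(0.25 * len); keys[0].pCallbackData = leftFoot;
        keys[1].Time = (FLOAT)(0.75 * len); keys[1].pCallbackData = rightFoot;

        LPD3DXBUFFER data = NULL;
        HRESULT hr = src->Compress(D3DXCOMPRESS_DEFAULT, 0.5f, NULL, &data);
        if (FAILED(hr)) return hr;

        LPD3DXCOMPRESSEDANIMATIONSET dst = NULL;
        hr = D3DXCreateCompressedAnimationSet(src->GetName(), tps,
                 src->GetPlaybackType(), data, 2, keys, &dst);
        data->Release();
        if (FAILED(hr)) return hr;

        // Swap the new set in; the callbacks are later delivered through the
        // ID3DXAnimationCallbackHandler passed to AdvanceTime().
        ctrl->UnregisterAnimationSet(src);
        hr = ctrl->RegisterAnimationSet(dst);
        dst->Release();
        return hr;
    }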

I've yet to extend Audio support to my Physics objects, but that's a natural next step.
Also, I plan to implement an object to manage Sequences of Animations for SkinMesh.
This would be in the form of an ordered Queue of instructions for the AnimationController (mixer).
During development, it would be very nice to use Biterider's new ScriptingHost to allow for external runtime manipulation of animation sequences.

I've been spending a lot of time looking at various Game AI implementations (especially Monolith's F.E.A.R), as well as experimenting with my Neural Network code.

Also, I've been thinking a lot about how the World will be represented within the Game engine.
In the end, there will be a network of Cell nodes with Portals linking them together... i.e., a "Cell and Portal" engine... but not necessarily using Portal Rendering as it is commonly known.
So far, I am leaning toward doing the following:
-import World as triangle soup
-generate BSP Tree (Quake style: empty nodes, convex clusters of triangles at the leaves)
-generate Portals between Leaf nodes
-generate Cell Network from Leaves and Portals
-write Cell Network out to a custom file

We don't need the BSP tree anymore... we can keep it, or not.
We're perfectly able to calculate PVS, either at runtime or as a preprocess, using only our Cell Network.
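For concreteness, the Cell Network boils down to something like this (a C++ sketch of the data layout only; the names are illustrative):

    #include <vector>
    #include <d3dx9.h>

    // Each BSP leaf becomes a Cell; each 'hole' between two leaves becomes a
    // Portal winding referencing the two Cells it joins.
    struct Portal {
        int cellA, cellB;                  // the two Cells this portal connects
        std::vector<D3DXVECTOR3> winding;  // convex polygon outlining the hole
    };

    struct Cell {
        std::vector<int> triangles;        // indices into the world triangle soup
        std::vector<int> portals;          // indices into the portal list
    };

    struct CellNetwork {
        std::vector<Cell>   cells;         // the BSP tree itself is now optional
        std::vector<Portal> portals;
    };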

Posted on 2009-04-12 10:21:27 by Homer
How about a NavMesh?
http://www.ultranos.com/OpenIL/src/ILX/ILXNavMesh.cpp
http://www.ultranos.com/OpenIL/ILX.h
The A* search and other AI-supporting funcs are not implemented or defined there yet, but it's usable for character navigation so far. I'm still a noob at these things, and am implementing only the stuff I currently need, so I may be off-base with this NavMesh ^^'.
Fortunately, I've confirmed it's extremely easy to draw, use and expand. I.e., even if it's squashed into 2D, it can describe a 50-floor skyscraper's paths and pass on a rough Z value for further refinement against the bumpy, detailed meshes of the level. And an artist can make the geometry for that skyscraper in under 5 minutes. Controlled input is better than manually filtering heaps of procedurally-generated output.
With a flag on edges, we can specify whether they are see-through or walls - to compute visibility input for the AI. All simplified to 2D, with possibly only a few side effects. Though, I don't like thinking in terms of first-person shooters.
Posted on 2009-04-12 17:01:42 by Ultrano
Well, there's nothing wrong with using a NavMesh, but someone has to generate it, and it represents a whole bunch of extra geometry... for essentially-2D maps like heightmapped terrains, using influence maps is a better option.
If we choose NavMeshes, it would be better to automate the generation process and take the responsibility away from the 3D artists, leaving them to concentrate on what they do best.
I've noticed that some games have begun to implement "NavMesh Generators", which work by identifying chunks of "walkable" triangles, attempting to string them together, and using a heuristic to discard 'unwanted islands'.

Anyhow, I was more referring to the 'macro' level of World representation.

One common scheme for AI pathfinding is the use of "waypoints"...
I drew a parallel to Nathan Whitaker's "Extracting Connectivity Information from a BSP Tree".
Nathan observes that the leaf nodes in a BSP tree represent convex subsets of triangles defining convex subspaces (Cells), and his algorithm can be used to discover the shape of the Portals which connect these subspaces... these are the 'holes' through which entities may travel between Cells.
Having extracted a list of Cells and their connecting Portals, it's easy to see how this information can be built into a NODE NETWORK (a graph rather than a tree)... at this point, the BSP tree can be discarded, since hardware occlusion culling is faster these days than walking the (possibly huge) BSP tree... and we can rely on portal-based culling if we're that concerned.
My concept is to grant each Node a 'score' based on its contents, and apply A* (maybe Coordinated A*) pathfinding to this Network in order to direct NEEDS based AI to appropriate GOAL CELLS.
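A minimal A* over that network might look like this in C++ (a sketch under my own assumptions: each node's 'score' is folded into the traversal cost, and CellNode/center are illustrative names):

    #include <algorithm>
    #include <functional>
    #include <queue>
    #include <utility>
    #include <vector>
    #include <d3dx9.h>

    struct CellNode {
        D3DXVECTOR3 center;                // representative point of the Cell
        std::vector<int> neighbours;       // Cells reachable through Portals
        float score;                       // contents-based cost bias
    };

    std::vector<int> FindPath(const std::vector<CellNode>& g, int start, int goal)
    {
        std::vector<float> cost(g.size(), 1e30f);
        std::vector<int> parent(g.size(), -1);
        auto h = [&](int n) {              // heuristic: straight-line distance
            D3DXVECTOR3 d = g[goal].center - g[n].center;
            return D3DXVec3Length(&d);
        };
        typedef std::pair<float, int> QI;  // (f = g + h, node)
        std::priority_queue<QI, std::vector<QI>, std::greater<QI> > open;
        cost[start] = 0.0f;
        open.push(QI(h(start), start));
        while (!open.empty()) {
            int n = open.top().second; open.pop();
            if (n == goal) break;
            for (int m : g[n].neighbours) {
                D3DXVECTOR3 d = g[m].center - g[n].center;
                float step = D3DXVec3Length(&d) + g[m].score;
                if (cost[n] + step < cost[m]) {
                    cost[m] = cost[n] + step;
                    parent[m] = n;
                    open.push(QI(cost[m] + h(m), m));
                }
            }
        }
        std::vector<int> path;             // walk the parent chain back to start
        for (int n = goal; n != -1; n = parent[n]) path.push_back(n);
        std::reverse(path.begin(), path.end());
        return path;
    }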

It's important to recognize that not all the Cells we generate will actually be reachable for ground-dwelling AI.
I don't believe I would have any difficulty in QUICKLY extracting a NavMesh for each Cell, based on the limited set of Triangles within the Cell and on which Edges touch which Portals at floor level.

But I don't expect my AI to always be confined to the floor.

Posted on 2009-04-13 02:09:38 by Homer
I feel that auto-generated navigation is bad, as the published implementations I've seen are horrifying. E.g. in HL2/CSS:
http://developer.valvesoftware.com/wiki/Bot_Navigation_for_Counter-Strike:Source
In-game viewing and editing of these "nav-meshes" can be enabled, to see how awfully suboptimal the results become. At least in HL2/CSS, that is. Tiny, arbitrarily rotated and scaled quads with 0-256 adjacent nodes and a bunch of controlling flags. A small map: 2MB+ of compressed navigation data, while I think it could be done with several orders of magnitude fewer nodes. The thing is, thanks to this imperfect autogeneration, the graph fed to A* is probably overloaded, and effectively one can't put many smart bots on even a gamer's highest-end PC.
Posted on 2009-04-13 03:32:49 by Ultrano
When I think of a NavMesh, I am reminded of the Neighbouring Triangles data available in a .X model ....
I believe that I can build the NavMesh directly into the World mesh, by simply discovering and tagging the walkable triangles, thus forming a finite Tree of triangles and their edge connections.
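Sketched in C++ against D3DX, that idea is just this (the slope test and epsilon are my assumptions):

    #include <vector>
    #include <d3dx9.h>

    // Tag the 'walkable' triangles of an ID3DXMesh using its adjacency data,
    // giving an implicit NavMesh (triangles plus their edge connections).
    // Assumes per-face normals were computed beforehand.
    std::vector<bool> TagWalkable(ID3DXMesh* mesh,
                                  const std::vector<D3DXVECTOR3>& faceNormals,
                                  float maxSlopeCos)   // e.g. cos(45 degrees)
    {
        DWORD faces = mesh->GetNumFaces();
        std::vector<DWORD> adj(faces * 3);             // 3 edge-neighbours/face
        mesh->GenerateAdjacency(0.001f, &adj[0]);      // epsilon welds vertices

        std::vector<bool> walkable(faces);
        for (DWORD f = 0; f < faces; ++f)              // 'up enough' to stand on
            walkable[f] = (faceNormals[f].y >= maxSlopeCos);

        // adj[f*3 + e] is the face sharing edge e of face f (0xFFFFFFFF if
        // none); walking that graph across walkable faces yields the NavMesh.
        return walkable;
    }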
Posted on 2009-04-13 07:20:16 by Homer
Fixed a bug in StreamingMP3Sound class.
Released my Audio.inc file to Biterider for inclusion in the next OA32 update.
Implemented a simple Console for taking text input at runtime.
Annoyed with a bug that causes rendering to cease when the app window is resized.
Studying how to implement game AI with classic multilayered feedforward back-propagating Neural Networks.
Posted on 2009-04-14 21:43:09 by Homer
I've been studying something called 'Neuro-Evolution'.

You create a population of NeuralNetwork-driven AI critters.
Learning is achieved, but not through the usual Training (back-propagation) method.
After some time, you cull the herd, crossing and mutating the 'genes' of the fittest survivors.
These 'genes' are in fact the Weights arrays inside the Neural Network of each critter.

This scheme allows us to implement 'negative reinforcement' since we have total control over the definition of 'fittest survivors'.

It's certainly food for thought :P

Posted on 2009-04-15 07:43:52 by Homer
A friend of mine recently played with a solution like that for the Traveling Salesman Problem.
The idea was that a more efficient path could evolve from previous paths. So you start by randomly generating a set of paths.
Then, for each path, you randomly change part of the path - this is the evolutionary step. You then take the shortest X paths (the 'fittest survivors'), and do more evolution. This should eventually converge/evolve into a reasonably optimal solution.
An interesting aspect is that evolution is a parallel process.
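Each path can be mutated and evaluated independently, which is where the parallelism comes from. For illustration, one generation could be as simple as this (a quick C++ sketch; TourLength is an assumed helper that sums a path's distances):

    #include <algorithm>
    #include <random>
    #include <vector>

    typedef std::vector<int> Tour;   // a path: a permutation of city indices

    // Mutate every tour (swap two cities), then keep the X shortest of the
    // combined parent+child pool - the 'fittest survivors'.
    void Evolve(std::vector<Tour>& population, size_t keep, std::mt19937& rng,
                double (*TourLength)(const Tour&))
    {
        std::vector<Tour> pool = population;
        for (Tour& t : pool) {
            std::uniform_int_distribution<size_t> pick(0, t.size() - 1);
            std::swap(t[pick(rng)], t[pick(rng)]);     // the evolutionary step
        }
        pool.insert(pool.end(), population.begin(), population.end());
        std::sort(pool.begin(), pool.end(),
                  [&](const Tour& a, const Tour& b)
                  { return TourLength(a) < TourLength(b); });
        pool.resize(keep);
        population = pool;
    }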
Posted on 2009-04-15 09:23:04 by Scali
I thought a few people might be interested to see example code for a simple neural AI, so I've attached some files.
This example implements a population of AI critters which evolve the behavior "seek food".

NNet.inc contains the code for a Neural Network class.
RNG.inc contains the code for my Random Number Generator.
Genetic AI Controller.inc contains the example code, which is comprised of several classes:

-AIBaseObject (simply describes a Vec3 position)
-Plant (derived from AIBaseObject, represents food for our critters)
-AICreature (derived from AIBaseObject, represents an instance of an AI critter)
-GAIController (drives the simulation)

The AICreature class embeds a NNet object (its "brain"). Inside the NNet is an array of weights. The GAIController class drives a population of AI critters for one lifespan, then applies a genetic algorithm to produce a new (hopefully improved) generation by screwing with the weights of the current population.

If we look closely at the GAIController.Epoch method, we can see that the algorithm works as follows (a rough C++ sketch follows the list)...
ELITISM : create some new critters whose weights are directly cloned from the NNets of the N fittest members of the current population.
CROSSOVER : create some pairs of new critters whose weights are substrings cloned from the NNets of pairs of reasonably fit members of the current population.
MUTATION : deform the weights of the new critters that result from crossover.
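In C++ the three steps reduce to something like this (not the OA32 code; the rates and names are placeholders):

    #include <random>
    #include <vector>

    typedef std::vector<double> Genome;    // a NNet's weights array ('genes')

    // ELITISM is just copying a Genome verbatim; CROSSOVER and MUTATION:
    Genome Crossover(const Genome& a, const Genome& b, std::mt19937& rng)
    {
        std::uniform_int_distribution<size_t> cut(0, a.size());
        size_t c = cut(rng);                        // single-point crossover
        Genome child(a.begin(), a.begin() + c);
        child.insert(child.end(), b.begin() + c, b.end());
        return child;
    }

    void Mutate(Genome& g, double rate, double amount, std::mt19937& rng)
    {
        std::uniform_real_distribution<double> chance(0.0, 1.0);
        std::uniform_real_distribution<double> perturb(-amount, amount);
        for (double& w : g)
            if (chance(rng) < rate)                 // deform a few weights
                w += perturb(rng);
    }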

Each AICreature has a very simple neural network, with four REAL8 inputs (as 2 x Vec2), and two REAL8 outputs.
We apply two inputs to the NNet: a vector from the critter to its closest food, and a vector representing the direction it is currently looking. Then we run the NNet and grab its two outputs.
The two outputs are used to imply left and right turning forces.
We use these to calculate the change in the critter's rotation, speed and position.
GAIController drives the critters, and watches for collisions between critters and food.
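The per-frame step implied above looks roughly like this (a sketch; NNetStub stands in for the real NNet class, and the clamp value is mine):

    #include <algorithm>
    #include <cmath>

    struct NNetStub {   // placeholder for the real NNet ('brain')
        void Run(const double in[4], double out[2]) { out[0] = out[1] = 0.0; }
    };

    void StepCritter(NNetStub& brain, double& x, double& y, double& heading,
                     double foodX, double foodY, double dt)
    {
        // Inputs: vector to the closest food, and the current look direction.
        double in[4] = { foodX - x, foodY - y,
                         std::cos(heading), std::sin(heading) };
        double out[2];
        brain.Run(in, out);

        // The two outputs imply left/right turning forces: their difference
        // becomes a (clamped) turn, their sum a forward speed.
        double turn = std::max(-0.3, std::min(0.3, out[0] - out[1]));
        heading += turn * dt;
        double speed = out[0] + out[1];
        x += std::cos(heading) * speed * dt;
        y += std::sin(heading) * speed * dt;
    }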

It's important to note that we don't tell the critter which way to turn, or that food is good !!!
If a critter hits some food, we increment its 'fitness' score.
The genetic algorithm breeds new generations which tend strongly toward the traits of the fittest critters of the previous generations; thus the system favors critters which turn toward the food rather than away from it... critters whose behaviors don't result in them running into some food will soon find their 'genes' culled from the gene pool.

It takes quite a few generations to evolve 'apparently intelligent' behavior... no fewer than 200.
After some 2000 generations, these guys will become incredibly efficient food gatherers, aiming and moving with apparent purpose.

Posted on 2009-04-18 23:50:00 by Homer
It occurs to me that, at least in this example case, we can help these guys learn even faster, by using classic back-propagation at the appropriate moment...

We can find the signed angle between two vectors using the perpendicular dot product (the 2D cross product) together with the dot product; the dot product alone only gives the unsigned angle.
In our case, we want to turn left if the sign is positive, and right if the sign is negative.

When we check the outputs of the neural net, we can determine whether the nnet is telling the critter to turn in the correct direction or not. If not, we can call the AICreature.Brain.train method in a loop, telling it exactly what its outputs SHOULD have been, until it produces ACCEPTABLE output, then continue.
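The direction test itself is tiny (a C++ sketch; the corrective loop is outlined in comments, since the real signature lives in AICreature.Brain.train):

    #include <cmath>

    // Sign comes from the 2D cross (perp-dot) term, magnitude from the dot.
    // Positive result: the food is to the left; negative: to the right.
    double SignedTurn(double lookX, double lookY, double foodX, double foodY)
    {
        double cross = lookX * foodY - lookY * foodX;
        double dot   = lookX * foodX + lookY * foodY;
        return std::atan2(cross, dot);
    }

    // Corrective step, in outline: if the net's outputs imply a turn whose
    // sign disagrees with SignedTurn(), back-propagate the desired outputs
    // (e.g. {1,0} for 'turn left', {0,1} for 'turn right') in a loop until
    // the produced output is acceptable, then continue the simulation.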

Posted on 2009-04-19 00:20:46 by Homer
I think it's FAIR to apply classic back-propagation (training) to help evolve AI expert systems that are receiving environmental input and being genetically evolved on their responses to it.
After all, it's quite natural for an inexperienced human to overshoot their goal and perform corrections, particularly when synchronizing some action with their vision system.
Posted on 2009-04-21 02:58:56 by Homer
Added new method: D3D_Camera.GetPickRay calculates a ray extending from cameraspace (the screen) into the 3D world. The ray can be returned either in WorldSpace or in some BodySpace, which allows us to deal with arbitrarily oriented entities.
Example uses for this method are selecting 3D entities via the mouse cursor, and testing whether a player is aiming his line-of-sight weapon at an appropriate target.
In one demo, I used it to allow the user to 'paint' a blended texture onto a 3D terrain, by performing triangle/ray intersection tests, calculating the exact point of intersection on a triangle, and thus finding the Texel Coordinates at that point on the triangle.
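GetPickRay follows the usual D3DX construction (a sketch; the real method's signature differs):

    #include <d3dx9.h>

    // Build a world-space ray under the mouse cursor. To get the ray in some
    // BodySpace instead, transform it by that body's inverse world matrix.
    void GetPickRay(int mouseX, int mouseY, int width, int height,
                    const D3DXMATRIX& proj, const D3DXMATRIX& view,
                    D3DXVECTOR3* origin, D3DXVECTOR3* dir)
    {
        // Point on the projection window, expressed in view space.
        D3DXVECTOR3 v;
        v.x =  (2.0f * mouseX / width  - 1.0f) / proj._11;
        v.y = -(2.0f * mouseY / height - 1.0f) / proj._22;
        v.z = 1.0f;

        // The inverse view matrix carries the ray into world space.
        D3DXMATRIX inv;
        D3DXMatrixInverse(&inv, NULL, &view);
        dir->x = v.x * inv._11 + v.y * inv._21 + v.z * inv._31;
        dir->y = v.x * inv._12 + v.y * inv._22 + v.z * inv._32;
        dir->z = v.x * inv._13 + v.y * inv._23 + v.z * inv._33;
        origin->x = inv._41; origin->y = inv._42; origin->z = inv._43;
    }

The terrain-painting trick then falls out of D3DXIntersectTri: its barycentric U,V outputs interpolate the hit triangle's texture coordinates, giving the Texel Coordinates at the intersection point.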
Posted on 2009-04-21 11:32:06 by Homer
The physics framework has been a great exploration into computational geometry.
I am beginning to put together a "Geometry Toolkit" includefile containing all kinds of geometry-related functions, such as intersection, geometric clipping, splitting and merging functions, eventually supporting all combinations of the typical geometric primitives.
This file will be donated to the ObjAsm32 package in the near future, but you'll find a preview available here in this Thread within the next week or so.

Posted on 2009-04-23 01:20:16 by Homer

Implemented 'mouse pick ray' selection of mesh instances (D3D_MeshManaged objects).
Extended D3D_Mesh baseclass to support transparent materials. Also tried (without success) to have D3D_Mesh.LoadFromXFile detect when a mesh has no Vertex Normals and clone a mesh that does (for lighting purposes).

Biterider has taken a scalpel to the D3D_Window ancestor class in order to remove the embedded D3D_Camera object, and moved the camera audio code into a derived camera class.

My immediate goals will be to implement some runtime tools for manipulating selected mesh instances, and to re-implement my xfile-to-bsp code with a view to extracting cell-and-portal connectivity information from the compiled bsp tree.

Posted on 2009-04-25 22:51:29 by Homer
Wrote a manager class to draw rectangular portions of textures to the screen (via ID3DXSprite). It supports alphablending, of course, and should be a big help for creating HUDs and other 'flat' onscreen displays, you know, the 'game stuff' that is drawn 'on top' of the Scene.
If you wanted to create a fancy-looking control, you simply need to add (and thus draw) its components in the appropriate order: first a 'mask', then the control border, then the control content... all of these can be sourced from a single texture :)
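In ID3DXSprite terms, that layering is just draw order (a sketch; the RECTs are hypothetical atlas regions):

    #include <d3dx9.h>

    void DrawControl(ID3DXSprite* sprite, IDirect3DTexture9* atlas,
                     const D3DXVECTOR3& pos)
    {
        RECT mask    = {   0, 0,  64, 64 };   // regions within one texture
        RECT border  = {  64, 0, 128, 64 };
        RECT content = { 128, 0, 192, 64 };

        sprite->Begin(D3DXSPRITE_ALPHABLEND);
        sprite->Draw(atlas, &mask,    NULL, &pos, 0xFFFFFFFF);  // back layer
        sprite->Draw(atlas, &border,  NULL, &pos, 0xFFFFFFFF);
        sprite->Draw(atlas, &content, NULL, &pos, 0xFFFFFFFF);  // front layer
        sprite->End();
    }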

Posted on 2009-04-28 04:39:40 by Homer
Extended D3D_TextureManager to properly support color-key transparency - especially the 'Restore' method. This is very minor work, but it does affect the D3D_Mesh class.

Implemented the 'Screen Sprites' class I mentioned in the previous post - works like a charm. It's really good with 24-bit colorkey images; if you use 32-bit (w/alpha) images, the colorkey is ignored and you get per-pixel alpha instead. Very nice for making onscreen displays.
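A colorkeyed load with D3DX typically looks like this (a sketch; the device, filename and magenta key are placeholders, and passing 0 as the ColorKey lets 32-bit sources keep their own alpha channel):

    #include <d3dx9.h>

    extern IDirect3DDevice9* device;   // placeholder device

    IDirect3DTexture9* tex = NULL;
    HRESULT hr = D3DXCreateTextureFromFileEx(device, "hud.bmp",
        D3DX_DEFAULT, D3DX_DEFAULT, D3DX_DEFAULT, 0,
        D3DFMT_A8R8G8B8, D3DPOOL_MANAGED,
        D3DX_DEFAULT, D3DX_DEFAULT,
        0xFFFF00FF,          // colorkey: opaque magenta becomes transparent
        NULL, NULL, &tex);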
Posted on 2009-04-30 18:52:36 by Homer
I've asked my girlfriend to try to put together a nice artistic game-display texture for me to toss at the new SpriteManager. She's talented, I'm not... at least not where art is involved.

Started rewriting BSPGen, and started extending the console grammar. I'd like to be able to issue runtime commands to create and manipulate entities via the console, instead of hardcoding everything.
The immediate goal is to hook up enough console commands to load, save and very basically edit a "game level" (3D worldscape); I'll probably end up going console-crazy and extending advanced features to the console. Being able to load, save and edit a "world" will be a milestone in itself.

Funny, this is all code I implemented years ago.
But I wasn't working in an object-oriented source environment.

Looking forward to making quick progress so I can play more with AI.

Posted on 2009-05-01 08:28:24 by Homer
Consider using the new ScriptHost. It is much more flexible than a simple command line. You can loop, make logical decisions, load, save, position your meshes, etc, etc.

Biterider
Posted on 2009-05-01 14:43:41 by Biterider
I sure plan on playing with ScriptHost very soon in that regard, thanks.
I've stopped working on the console - at least I can use it to issue named scripts :P

Working on a revised BSPGen.
Having a few problems that seem to be caused by/in the DbgVec3 macro!
Posted on 2009-05-02 02:50:21 by Homer
Wrote dedicated code to generate a 'Quake-Style' BSP Tree from an input 3D model (splitting-planes at the Nodes, triangles at the Leaves).

Portals are the 'holes' in the world which join one place to another; doorways are a good example.
The BSP Tree generator has discovered all the 'rooms' in the 3D model, now I am trying to discover the Portals which connect those 'rooms' together.

Wrote code to generate a large number of 'portal fragments' from the splitting-planes of the Tree, walk them through the tree until each arrives at a leaf node, and filter the resulting lists of fragments for obvious fakes (a portal fragment must exist in two Leaf nodes, otherwise it is junk).
All that remains of Portal Discovery is to trim the remaining fragments against the faces in the two Leaf nodes which are connected by a given portal (... a portal touches two leaves; clip it against their geometry).
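The basic building block for that trimming is a single-plane polygon clip - the classic Sutherland-Hodgman step (a C++ sketch; trimming against a leaf's convex bounds is just repeated application of this):

    #include <vector>
    #include <d3dx9.h>

    // Clip a portal fragment to the front side of a plane (ax+by+cz+d >= 0).
    std::vector<D3DXVECTOR3> ClipToPlane(const std::vector<D3DXVECTOR3>& poly,
                                         const D3DXPLANE& plane)
    {
        std::vector<D3DXVECTOR3> out;
        for (size_t i = 0; i < poly.size(); ++i) {
            const D3DXVECTOR3& a = poly[i];
            const D3DXVECTOR3& b = poly[(i + 1) % poly.size()];
            float da = D3DXPlaneDotCoord(&plane, &a);
            float db = D3DXPlaneDotCoord(&plane, &b);
            if (da >= 0) out.push_back(a);        // keep points in front
            if ((da >= 0) != (db >= 0)) {         // edge crosses the plane
                float t = da / (da - db);
                out.push_back(a + (b - a) * t);   // emit the intersection
            }
        }
        return out;
    }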

Once I've built a list of Portals and mapped which Leaf nodes they connect, I can theoretically discard all of the nodes in the Tree except for the Leaves... I can build a Cell and Portal Network from the Portal/Leaf mappings.

(I don't intend to use the BSP tree for visibility or rendering, just to construct my Cell 'n' Portal graph).

Posted on 2009-05-06 07:52:16 by Homer