Today I began to write a GE_Creature class, which implements neural-network-powered artificial intelligence for a managed mesh/skinmesh instance.
This led me to implement my RNG (random number generator) class in GameEngine.

It is my intention that GE_Creature act as a base class for user-derived, game-specific NPCs.
Posted on 2009-10-30 03:28:53 by Homer
I have borrowed a lot of ideas from a demo called "Smart Sweepers" (can't recall who wrote it, but you can find the source code via Google).

The key concept is that we provide our creature's neural network with environment-based inputs (which simulate one or more physical senses, such as Sight), and use the outputs of its neural network to drive its motion. Then we create an entire population of these creatures, and 'train' them via a Genetic Algorithm.
Creatures whose neural networks produce the most desirable output (ie have the best genetics) are rewarded by being allowed to pass their genetics to subsequent 'generations', thus ensuring that the population as a whole tends toward desirable behavior.
After several thousand generations, the creatures not only improve dramatically at their prime directive, but also appear to develop interesting unscripted behaviors.
They seem to be getting smarter all by themselves!
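
For the curious, the 'reward' step is commonly implemented as roulette-wheel selection; here's a minimal sketch in C++ terms (illustrative names like Genome, not my actual OA32 classes):

    #include <vector>
    #include <cstdlib>

    // Hypothetical genome record: a creature's NNet weights plus the
    // fitness score it earned over the last generation.
    struct Genome {
        std::vector<double> weights;
        double              fitness;
    };

    // Roulette-wheel selection: a genome's chance of being picked as a
    // parent is proportional to its share of the population's total fitness.
    const Genome& PickParent(const std::vector<Genome>& pop, double totalFitness)
    {
        double slice = (double)rand() / RAND_MAX * totalFitness;
        double accum = 0.0;
        for (const Genome& g : pop) {
            accum += g.fitness;
            if (accum >= slice)
                return g;
        }
        return pop.back();   // guard against floating-point rounding
    }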

As soon as the base classes are completed (very shortly), I'll be creating a small demo which implements Predator and Prey behaviors: Microsoft Tigers will chase Stanford Bunnies... and the Bunnies will attempt to gather food while simultaneously evading their predators.
Posted on 2009-10-30 19:24:32 by Homer
Back from camping.

It works like this:
We have a Neural Network with 4 inputs, 2 outputs, and one or two hidden layers.
The two outputs are associated with "left and right turning motors": given some environmental data, a creature will choose to drive its "left foot" or "right foot" harder, causing it to turn.
But what motivates the creature to do that?
In our case, the motors are being driven directly by the only two outputs of our very simple neural network.
And those outputs will depend upon the inputs (the creature's senses) and the internal weights of the neural network (its genes).
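
In C++ terms, one layer of such a network computes something like this (a sketch; my actual code is ObjAsm32, using real8 throughout):

    #include <vector>
    #include <cmath>

    // One fully-connected layer: each neuron takes the weighted sum of
    // the layer's inputs plus a bias, then squashes it with a sigmoid.
    std::vector<double> Layer(const std::vector<double>& in,
                              const std::vector<std::vector<double>>& w,
                              const std::vector<double>& bias)
    {
        std::vector<double> out(w.size());
        for (size_t j = 0; j < w.size(); ++j) {
            double sum = bias[j];
            for (size_t i = 0; i < in.size(); ++i)
                sum += in[i] * w[j][i];
            out[j] = 1.0 / (1.0 + std::exp(-sum));   // sigmoid activation
        }
        return out;
    }

    // A full pass chains layers: 4 inputs -> hidden layer(s) -> 2 outputs.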

So again, what are we feeding to the FOUR inputs of the neural network?
We are feeding it two double-precision (real8) 2D vectors (aka Vec2_Doubles).
The first vector is the direction of most interest to the creature (explained below) in the XZ plane, and the second vector is the direction the creature is currently facing in the XZ plane.

Our creature wants to eat (or find health, or whatever)... For the purposes of the first demo, I assume that the creature ALWAYS wants to eat, because I attribute each creature a 'fitness' score at the end of each generation based upon how much food it captured.
So I tell the creature the direction to the closest food item, and that is the first of the two 2D inputs to the creature's NNet. If our creature has predators, I find the direction to the closest predator, and SUBTRACT it from the aforementioned vector, because the prey creature would prefer not to be eaten. Again, this is simplistic; it would be better to take the several closest predators into account, via several inputs.

I feed the two vectors (4 x real8 components) into the creature's NNet, and use the difference between the two output values to modify the rotation of the creature.
Finally, I use the SUM of the two output values to determine the rate of change of rotation/position (ie angular and linear velocity).
These could be applied to a physical model; we'll see how it pans out.
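
Putting that together, one update tick looks roughly like this sketch (all names illustrative; 'brainUpdate' stands in for the NNet's forward pass):

    #include <cmath>

    struct Vec2 { double x, y; };   // a 2D vector in the XZ plane

    // One simulation tick. dirToFood and heading are unit vectors in XZ.
    void UpdateCreature(Vec2 dirToFood, Vec2 dirToPredator, bool hasPredator,
                        Vec2 heading, double dt,
                        double& rotation, Vec2& position,
                        void (*brainUpdate)(const double in[4], double out[2]))
    {
        Vec2 interest = dirToFood;
        if (hasPredator) {                       // prefer not to be eaten
            interest.x -= dirToPredator.x;
            interest.y -= dirToPredator.y;
        }

        double in[4] = { interest.x, interest.y, heading.x, heading.y };
        double out[2];
        brainUpdate(in, out);                    // NNet forward pass

        double turn  = out[0] - out[1];          // difference -> angular velocity
        double speed = out[0] + out[1];          // sum -> linear velocity

        rotation   += turn * dt;
        position.x += std::cos(rotation) * speed * dt;
        position.y += std::sin(rotation) * speed * dt;
    }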



Posted on 2009-10-31 22:03:53 by Homer
I've written a manager class called GE_Population which controls/manages a population of artificial lifeforms.

GE_Lifeform is the BaseClass.
GE_Vegetation and GE_Creature both derive from GE_Lifeform.
The difference is that GE_Creature has a brain :)
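
In rough C++ terms (my actual classes are ObjAsm32 objects; this just shows the shape):

    struct NeuralNet { /* weights, forward pass, ... */ };

    class GE_Lifeform {                        // common base for all lifeforms
    public:
        virtual ~GE_Lifeform() {}
        virtual void Update(double dt) = 0;
    };

    class GE_Vegetation : public GE_Lifeform { // flora: no brain
    public:
        void Update(double) override { /* grow, get eaten, respawn... */ }
    };

    class GE_Creature : public GE_Lifeform {   // fauna: has a brain
    public:
        void Update(double) override { /* sense -> think -> move */ }
    private:
        NeuralNet brain;                       // the difference :)
    };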

The code is about 98% complete, as it's a port of previous work, so a working implementation should be forthcoming.


Posted on 2009-11-01 01:27:50 by Homer

The Creature AI code is all completed now; it builds, but is not yet tested.
You can now make populations of some kind of lifeform (flora or fauna).
Those populations which are Sentient are aware of populations of predators that might eat them, and of populations of prey which they might eat - this opens the way to more interesting and complex predator/prey behaviors (without necessarily needing more inputs/outputs).

Should have the code implemented in the form of a binary demo soon.
Posted on 2009-11-01 08:24:49 by Homer
Yay, a subforum devoted to all things GameDev :)

Modest goal for today is to implement a Population of flora (plants).
This is achieved via GE_Population, GE_Vegetation and GE_Lifeform.
Since GE_Population instances (skin)meshes via GameEngine's Manager classes, I should be able to get some quick visual feedback (rendering of managed skin/mesh instances is automagical).

Posted on 2009-11-01 23:49:39 by Homer
Would we be able to derive meat-eating flora from these objects, a la LSOH? :mrgreen:
Posted on 2009-11-03 16:15:06 by rags
Sure, why not :P
Posted on 2009-11-03 23:45:23 by Homer

My AI code is not acting as expected - I'm looking into it.
I've already fixed a few minor bugs left over from porting the code; it's probably something silly.
Posted on 2009-11-04 02:08:00 by Homer

The AI creatures are whizzing around under NNet control; some of them seem to spin like idiots - this is exactly as seen in the early generations of the SmartSweepers demo - perfect!!

Down to one bug remaining.

After some generations, the creature count reaches zero, which it should never do.
It should not be challenging to fix this.

Posted on 2009-11-05 23:32:36 by Homer
Today's major goal is to rewrite the 'genetic crossover' function such that it manipulates the input creatures instead of producing offspring for no good reason.
The secondary goal will be to track down the object leak during the 'Epoch' method.

An epoch marks a passage of time; for us it means 'one generation', whereby we take the collective brains of a generation of creatures, mangle those brains according to heuristics, mutation and other selection criteria, and shove the new brains back into the population.

No creatures should be created or destroyed during this process.
Any resemblance to actual creatures, implied or imagined, is purely coincidental.
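
In sketch form (C++ for readability; rates and names illustrative), the in-place crossover looks like this:

    #include <vector>
    #include <cstdlib>
    #include <utility>

    // Single-point crossover performed IN PLACE on two parents' weight
    // vectors: no offspring objects are created or destroyed, the two
    // creatures simply receive recombined brains.
    void CrossoverInPlace(std::vector<double>& a, std::vector<double>& b,
                          double mutationRate, double maxPerturb)
    {
        size_t cut = (size_t)rand() % a.size();
        for (size_t i = cut; i < a.size(); ++i)
            std::swap(a[i], b[i]);                    // exchange weight tails

        // Mutation: jitter the occasional weight by a small random amount.
        auto mutate = [&](std::vector<double>& w) {
            for (double& x : w)
                if ((double)rand() / RAND_MAX < mutationRate)
                    x += ((double)rand() / RAND_MAX * 2.0 - 1.0) * maxPerturb;
        };
        mutate(a);
        mutate(b);
    }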

Posted on 2009-11-06 00:04:31 by Homer
Well, I got one done, and not the other.
Today I must determine the cause of the decreasing creature count.
Posted on 2009-11-06 19:20:37 by Homer
Fixed it :)

I now have a working demo, with critters zooming around looking for food.
It's got a couple of weird issues to look into, but it's certainly stable - I ran a simulation for some hours, with generations evolving every 25 seconds.

I will post a binary as soon as I am happier with it.

Posted on 2009-11-07 04:36:44 by Homer
I've added a new method to the "gameclient eventsink" class.
GameEngine now notifies the game client application of a change in the state of a 'Player control'.
This is a good opportunity for the game to set/unset player animations, eg fire a transition between walking and idling animations.

Currently it is a requirement that the client application implement this method, but I will probably add code to make that optional (ie I can have GameEngine check whether the client defined this eventsinking method in their game implementation).
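
The idea, rendered as a C++ sketch (hypothetical names; in OA32 it amounts to checking whether the client actually defined the method):

    // The engine calls the notification only if the client provided a sink.
    struct IGameEventSink {
        // Called when a 'Player control' changes state (pressed/released).
        virtual void OnPlayerControlChanged(int controlId, bool active) {}
        virtual ~IGameEventSink() {}
    };

    void NotifyControlChanged(IGameEventSink* sink, int controlId, bool active)
    {
        if (sink)    // optional: the client may not care about this event
            sink->OnPlayerControlChanged(controlId, active);
    }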

In regards to my 'neural creature ai' diversion:
I'm considering trying conventional NNet training in addition to the genetic algorithm, for 'intensive training' of infant populations of AI.

As mentioned previously, our critter's inputs are 1) the direction to the closest food and 2) the direction the critter currently faces.
We can let the critter make its move (take its inputs, set its outputs), then determine how good or bad that move was, compared to the ideal motion.
Then we can make one training cycle for that creature's NNet, telling it what the outputs SHOULD have looked like.
We can repeat this loop until the output is acceptable (within some tolerance), then let the simulation continue.

This will look quite jittery at runtime; it's just a tool to educate some AI creatures, and it can be disabled once we've saved our educated population's brains to a file.
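
One such training cycle could be a simple delta-rule step on the output layer, sketched here in C++ (illustrative, assuming sigmoid outputs and a squared-error measure):

    #include <vector>

    // One supervised correction step for a sigmoid output unit.
    // 'inputs' are the activations feeding the output layer;
    // 'target' is what the output SHOULD have been for the ideal motion.
    void TrainOutputUnit(std::vector<double>& w, double& bias,
                         const std::vector<double>& inputs,
                         double output, double target, double learnRate)
    {
        // gradient of the squared error through the sigmoid
        double delta = (target - output) * output * (1.0 - output);
        for (size_t i = 0; i < w.size(); ++i)
            w[i] += learnRate * delta * inputs[i];
        bias += learnRate * delta;
    }

    // Repeat the sense -> move -> compare -> train loop until
    // |target - output| falls within the tolerance, then resume the sim.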


Posted on 2009-11-13 22:24:08 by Homer
Using neural networks to implement AI for games has turned out to be quite easy, very cool, and a whole lot of fun  :thumbsup:
And training them via evolutionary techniques is also fun, and also easy.
Of course, this assumes you already have working code for a feed-forward neural network (back-propagation not necessary).

And you can do a whole lot more than just the documented chase/evade kind of behaviors.
There are no rules for what you use as inputs to each NNet, no rules for what you do with the outputs, and no rules about how many of each there are. You just need to remember that the processing cost for larger NNets grows quickly (with the product of adjacent layer sizes), so keep things as simple as your needs allow.
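
To put a number on that cost: the work per forward pass is dominated by the weight count, which is the sum of the products of adjacent layer sizes - a quick sketch:

    #include <vector>

    // Weights in a fully-connected feed-forward net: for each adjacent
    // pair of layers, inputs*outputs weights (plus one bias per neuron).
    // e.g. layers {4, 6, 2} -> 4*6 + 6*2 = 36 weights (+ 8 biases).
    int CountWeights(const std::vector<int>& layers)
    {
        int total = 0;
        for (size_t i = 0; i + 1 < layers.size(); ++i)
            total += layers[i] * layers[i + 1];
        return total;
    }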

For example, your inputs for a game enemy could be:
-position
-direction enemy is facing
-enemy's hit points
-how many fellow enemies are in close proximity
-scaled direction to closest human player
-scaled direction to closest health item

And the outputs could be:
-left drive
-right drive
-propensity to use magic
-propensity to flee
-propensity to attack with melee weapon
Posted on 2009-11-14 17:31:01 by Homer
I have begun the odious task of (re)implementing (captured) audio input streams in OA32's Audio Engine.
This is actually not so bad, since I am just creating a mirror of the existing object chain, so that we have both input and output object chains with various degrees of competency. And the new object chain doesn't have to support 3D sound, so it's even simpler.
But it's a lot of work, it's less fun than playing with my AI creatures (joy to my heart, candy to my eye), and it stops me from being able to build my game engine until the work is complete ;)

Kinda putting a rod across my own back, but worth it in the long run, since I really need a side project to keep interest in this one, and I think a VoIP system built on top of NetCom's new socket types might be just the ticket :)

And really, an audio engine should do both input and output, or its just half an engine, right?

It's interesting actually, both my networking engine (NetCom ex NetEngine) and my audio engine (D3D_Audio ex Audio) were essentially gutted and rewritten from the ground up before being included in ObjAsm32's public library, and thereafter received small extensions under a clean and sane framework in order to reimplement functionality lost during the 'miniport'.
It seems I have a pattern of getting carried away under a given framework and not spending enough time making it pretty and keeping it tidy.
This has the result of making more work for me in the longer term, with respect to objects I submit/maintain for OA32.
Maybe I should slow down a bit and just furiously write down all the ideas that invade my brain every day, so I have a huge stack of 'todo'.
Nah, I think I should keep it balanced.
I should expect the rewrite - consider all new objects as draft submissions - and keep pushing the boundaries and exploring new fields as long as it's fun to do so.

The newly-agreed formatting for comments in OA32 sourcecodes should help keep me focused on maintaining my sourcecode, and my work on D3D_Audio is nearing completion, which always feels nice.

Yeah, about that - Biterider has added code to OA32's 'browser tool' so that it scans sourcecode for comments... it displays a treeview showing the ancestry of all OA32 and COM classes/interfaces, and can show you all the methods, params and comments for objects and methods and so on.
This tool can be very handy to look up the information for any given class/interface while you are coding.

Using the OA32 plugin for RadASM makes this tool available while you are coding, and also adds tools to check one or more files for unpreserved register leaks and unreferenced local vars  ;)

I will press on :)
Posted on 2009-11-15 04:31:43 by Homer
Just wanna say - my ai critters are really fun!

They start off braindead; some of them travel in circles, some don't.
After some generations, they evolve behaviors such as rapidly turning toward nearby prey as they scuttle about.
It's really amazing and cool, and I Love Neural Networks, and I want that on a Tee Shirt.
Posted on 2009-11-15 04:58:16 by Homer
I've been wondering... you have all these parts in your game engine... Physics, AI, audio, networking etc...
How do you test this sort of thing anyway? I mean, with networking code and voice chat and all, you'd need to have two machines and two people interacting to see if it actually works, right?

Perhaps you could build some small game/demo thingie so we can actually see this stuff in action, and play with it a bit? Or are there other people working on a game based on your engine, who may have something to look at and play with?
I mean, reading a blog is nice and all, but I'd like to see it with my own eyes, get my hands on it and all that. I think I'm not the only one in this.
Posted on 2009-11-15 05:22:32 by Scali
I'm almost ready for public stuff :)
I keep having little setbacks; at the moment it's the MP3 player section of the audio engine.
And that component is already public (under OA32) :O - bad, bad, and a priority to fix.
I've been asked to add Ogg Vorbis support, and I'm partway to implementing Capture support, but those will wait for a future version; the MP3 stuff is a critical issue.

I will at least make a commitment to share, asap, a binary demo of the AI creatures stuff!
That'll require the engine DLL, but at least it will show some of the recent work.
It will also require a fair bit of eye-candy files - about 10MB - so, for those interested enough to download it in the first place, I will make the media files a separate download, as they'll likely be required for further demos...
You've already seen some other parts of the engine in some of the other demos I've posted (physics, IOCP networking, 3D stuff inc. automatic spatial partitioning and portal generation).
Shaders - I know you're dying to ask - are an object that can be turned on and off before/after rendering an entity (not per material atm)... and Effects are not handled at all, but then, no one's asked me yet :P
I do support methods for assembling shaders from both source and compiled snippets ;)

postscript: my Y key is hanging by a thread
Posted on 2009-11-26 00:26:37 by Homer
Major problems with the MP3 Streaming section of the audio engine.

I did what Microsoft suggested, and called the appropriate methods to obtain a nice size for two 'compression conversion' buffers - one for a chunk of input MP3, and one for a chunk of output WAV.

And I assumed that the sizes would not change.
But the fact is that when we decompress a chunk of MP3, we can get more or less data than we expected!
And that can mean OVERFLOW and UNDERFLOW, in a situation where we need to keep a hardware buffer fed at all times.

My existing code analyzes each MP3 frame header, and the bit rate is not changing.
Nevertheless, the size of the output WAV data from each frame fluctuates.
Umm, my existing code locks and unlocks the hardware buffer once per MP3 frame, ugh.
And I end up chasing my tail trying to maintain sync!

So, I am rewriting the relevant code so that it decompresses as much as (or more than) I want, locks the hardware buffer once, writes it all out, and unlocks.
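
In sketch form (C++ for readability; decodeFrame and the Lock/Unlock pair are hypothetical stand-ins for the real ACM conversion and DirectSound buffer calls):

    #include <vector>
    #include <cstring>

    // Hypothetical stand-ins for the real calls:
    size_t decodeFrame(unsigned char* out, size_t cap);
    void*  LockHardwareBuffer(size_t bytes);
    void   UnlockHardwareBuffer(void* ptr, size_t bytes);

    // Staging-buffer approach: decode MP3 frames into a growable staging
    // buffer until at least one hardware chunk's worth is available, then
    // do a single Lock / memcpy / Unlock. The decoded size per frame may
    // still fluctuate - that's now fine, the staging buffer absorbs it.
    void FeedHardwareBuffer(std::vector<unsigned char>& staging, size_t chunkSize)
    {
        while (staging.size() < chunkSize) {
            unsigned char frame[8192];
            size_t n = decodeFrame(frame, sizeof(frame));
            staging.insert(staging.end(), frame, frame + n);
        }

        void* hw = LockHardwareBuffer(chunkSize);
        std::memcpy(hw, staging.data(), chunkSize);
        UnlockHardwareBuffer(hw, chunkSize);

        staging.erase(staging.begin(), staging.begin() + chunkSize);
    }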


Posted on 2009-11-26 00:44:27 by Homer