In this thread, we will explore the theory and implementation of Artificial Intelligence driven by a Neural Network, in the context of Game Development (if you want to use it to do something else, be my guest).

I will begin this amazing journey with some pretty detailed and heavy-duty information about Neural Networks.
If you know what they are and/or have the sourcecode for a working Feed-Forward Neural Network, then you can skip to here: http://www.asmcommunity.net/board/index.php?topic=29692.msg209747#msg209747

What can a Neural Network do for me?
That depends on how you train it, what you feed it, and how often you pet it.
Neural networks are certainly very good at some things - recognizing patterns is one of them.
They can be thought of as a mechanism for mapping a given set of inputs to a given set of outputs, where the relationship between inputs and outputs can be extremely nonlinear. Given input X, it can tell you Y.

But what can it DOOOO? How can I benefit from using a neural network?
Neural networks can be improved to perform their task better - they can learn.
After training, a neural network becomes an expert at solving a given kind of problem.
There are at least two ways that a neural network can be improved:
1> Back-Propagation of Errors <Link Here>
2> Evolutionary Techniques: http://www.asmcommunity.net/board/index.php?topic=29692.msg209625#msg209625

Since they can be applied to such a wide range of problems, the question should really be "how do I pose a problem to a neural network, and how does it learn from mistakes?" <Link that too>

But, I'm really not understanding all of this. Can you just spell it out? What are Neural Networks?
Well, your brain is one.
A neural network is a network of connected cells called neurons.
So before we can understand what a neural network is, we need to define what a neuron is.

What's a Neuron?
Neurons are really a very simple kind of gate, like the primitive logic gates found in electronics.
In biology (your brain), it's a cell that has some input 'wires' and some output 'wires' connecting it to other neurons. Each of these 'wires' has some 'resistance', such that some 'wires' are more active than others.
So for a given neuron, some inputs 'matter more' than other inputs do.
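
To make that concrete, here's a rough sketch in C of a single artificial neuron - the struct layout and names are just illustrative, not taken from any particular library:

#include <math.h>

/* One neuron: a set of weighted input 'wires' plus a bias. */
typedef struct {
    int    nInputs;
    float *weights;   /* one weight per input wire - its 'resistance' */
    float  bias;      /* how easily the neuron fires at all           */
} Neuron;

/* Sum the weighted inputs and squash the result with an
   activation function - here tanh, which outputs -1 to +1. */
float NeuronFire(const Neuron *n, const float *inputs)
{
    float sum = n->bias;
    for (int i = 0; i < n->nInputs; i++)
        sum += n->weights[i] * inputs[i];
    return tanhf(sum);
}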

What's a Neural Network?
It's a massively interconnected collection of neurons, arranged in 'layers'.
At one end of the network is the input layer, a set of input 'wires'.
And at the other end is the output layer, a set of output 'wires'.
And between them, one or more 'hidden layers' of neurons.
And between each layer, a mesh of one-to-many interconnections: each neuron's output feeds into many neurons of the next layer.
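
Stacking those neurons into layers, a feed-forward Run might look something like this sketch (again, illustrative names; it reuses the Neuron from the sketch above):

/* A layer is just an array of neurons; a network is an array of
   layers: [0] is the first hidden layer, the last is the output. */
typedef struct {
    int     nNeurons;
    Neuron *neurons;
} Layer;

typedef struct {
    int    nLayers;
    Layer *layers;
} NNet;

/* Feed the inputs through every layer in turn. bufA and bufB are
   caller-provided scratch space, each as large as the widest layer. */
void NNetRun(const NNet *net, const float *inputs,
             float *bufA, float *bufB, float *outputs)
{
    const float *in = inputs;
    for (int L = 0; L < net->nLayers; L++) {
        const Layer *lay = &net->layers[L];
        float *out = (L == net->nLayers - 1) ? outputs
                   : (L & 1)                 ? bufB : bufA;
        for (int i = 0; i < lay->nNeurons; i++)
            out[i] = NeuronFire(&lay->neurons[i], in);
        in = out;   /* this layer's outputs feed the next layer */
    }
}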

This post will be continued :)
Posted on 2009-11-15 05:29:02 by Homer
Neural networks store an array of 'input weights' (one weight for each input of each neuron).
Any semblance of 'intelligence' which they display is encoded within these weights.
And so, to 'improve' a neural network, we would wish to 'improve' this array.

But what is 'good', and what is 'bad' ?

As mentioned, one way of 'improving' a neural network is via evolution (genetic algorithms).
The idea is that we create a 'population' of neural networks, and pit them against each other in some virtual contest.
Then every so often, we apply 'genetic algorithms' to evolve the population, and (hopefully) improve the neural networks.

Genetic Algorithm:
We use the term 'epoch' to describe the end of each 'generation'.
At this time, some members of the population will be more 'fit' than others, perhaps they earned more experience points, or something like that.
The genetic algorithm is used to 'improve' the Weights in our population of neural networks by performing three operations.

Elitism:
We will take a handful of the most 'fit' members of the population, and for each, we will make N copies, overwriting the Neural Weights of the N 'least fit' members.

Crossover:
We will use a fitness-based heuristic to randomly select two reasonably fit members of the remaining population, and we will swap some random chunk of their Neural Weights.

Mutation:
We will randomly perturb the Neural Weights of the resulting pair of Neural Networks according to some threshold.

Thus, we are manipulating the weights inside the population of Neural Networks, neither creating nor destroying them.
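
In code terms, the 'genes' the genetic algorithm operates on are nothing more than each network's flattened weight array plus a fitness score. A minimal sketch (names are illustrative, and the population size is just an example):

#define POP_SIZE 50   /* example population size */

typedef struct {
    float *weights;   /* ALL of one creature's NNet weights, flattened */
    int    nWeights;
    int    fitness;   /* e.g. how many experience points it earned     */
} Genome;

Genome population[POP_SIZE];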

And so, we have a scheme which leans toward whatever Neural Networks achieved the best results.
It can be thought of as a parallel convergence solver.

All we need to know now, is how do we determine which neural networks are the most and least fit?
We'll need an example for that, and I'm going to use the classic example of predator and prey, because it's the simplest to understand BUT NOT THE END OF THE STORY.


Posted on 2009-11-15 05:44:14 by Homer
In this example, we will create two Populations, some prey and some predators.

In this example, the Prey will be flora - and we will assume that plants don't have a brain.
The AI creatures are Predators who happen to be vegetarians. We are setting up a Primary Directive, a Goal, which states that "predators like to eat plants" - that eating a Plant is, from the point of view of a vegetarian, a Good Thing.

Each creature will have a small Neural Network.
It will have four Inputs, and two Outputs.
That means that the Input Layer of the network has four neurons, and the Output Layer has two.
Between them, we should have at least one Hidden Layer, with 6 or 8 neurons in it. (From my own experimentation, neural networks generally benefit from having slightly more neurons per hidden layer than they have inputs or outputs - take whichever is greater, add a bit, and use that many per hidden layer.)
For this example, one hidden layer of 6 neurons is my suggested minimum, and one layer of eight is plenty.
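
That sizing rule of thumb, written out in code (the '+2' is just one illustrative choice of 'a bit'):

/* max(#inputs, #outputs) plus a bit, per hidden layer */
int SuggestHiddenSize(int nInputs, int nOutputs)
{
    int larger = (nInputs > nOutputs) ? nInputs : nOutputs;
    return larger + 2;   /* 4 inputs, 2 outputs -> 6 hidden neurons */
}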


So, let's talk about what kind of data the inputs and outputs can handle, and what we hook them up to, and why :)

Yay, we're going to apply neural networks to create an example of Machine Learning - our own living, breathing simulation of life itself, from which our creatures will evolve successful behaviors!
In this example, success simply means that a creature collided with a plant (ate it).
If a creature eats a plant, we'll reward it by incrementing its 'fitness counter'.
At the end of each 'generation', we'll apply genetic algorithms to train our creatures' brains in situ, giving weight to the creatures who are the most fit members of the population.
But first, let's talk about how we 'drive' our creatures - exactly what's going on with the inputs and outputs of each creature's neural network, and so on.

Next post: NNet Inputs and Outputs, in the context of Machine Learning.
Posted on 2009-11-25 23:43:03 by Homer
Neural network inputs and outputs deal with floating point values.
Some neural networks only cope with values from 0.0 to 1.0
A good implementation can handle values from -1.0 to +1.0

I will assume we have a good implementation!
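
The difference usually comes down to the activation function each neuron applies to its weighted sum. A quick sketch of the two common choices (standard C math functions):

#include <math.h>

float Sigmoid(float x) { return 1.0f / (1.0f + expf(-x)); } /* range  0..1  */
float Tanh(float x)    { return tanhf(x); }                 /* range -1..+1 */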

INPUTS
Now, I said our predator creatures will have four inputs, and we just learned that each will be a float from -1 to +1, k?
We will hand our creature two 2D Vectors (four inputs), which will be Normalized vectors (-1 to +1 on each axis).
The first vector will be the direction, in 2D space, toward the closest Prey.
And the second vector will be the direction in which this creature is currently looking.
Having plugged all the input data into our NNet, we can call its Run method, and look at the outputs.
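
A sketch of how those four inputs might be packed - the position and heading variables are hypothetical names, not from any particular engine:

#include <math.h>

typedef struct { float x, y; } Vec2;

/* Scale a vector to unit length, so each axis lies in -1..+1. */
static Vec2 Normalize(Vec2 v)
{
    float len = sqrtf(v.x * v.x + v.y * v.y);
    if (len > 0.0f) { v.x /= len; v.y /= len; }
    return v;
}

void PackInputs(Vec2 creaturePos, Vec2 preyPos, float heading,
                float inputs[4])
{
    /* vector 1: direction toward the closest prey */
    Vec2 toPrey = Normalize((Vec2){ preyPos.x - creaturePos.x,
                                    preyPos.y - creaturePos.y });
    /* vector 2: the direction the creature is looking (already unit) */
    Vec2 look = { cosf(heading), sinf(heading) };

    inputs[0] = toPrey.x;  inputs[1] = toPrey.y;
    inputs[2] = look.x;    inputs[3] = look.y;
}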

OUTPUTS
In this example, our predator has two outputs.
Just for a moment, imagine our creature is a battle tank, and that the two outputs are the signed amount of drive to apply to each of the tank's tracks (left and right).
If we assume that...
The sum of the outputs will tell us how fast the creature is moving (forward or backward).
And the difference will tell us how fast it is TURNING.
We simply take the outputs, whether they are good or bad, and derive linear and angular motions from them.
I will explain that more clearly if I am asked to.
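But here's a quick sketch anyway - the two scale constants are hypothetical tuning values:

#include <math.h>

#define MAX_SPEED     2.0f   /* world units per tick */
#define MAX_TURN_RATE 0.3f   /* radians per tick     */

/* leftTrack and rightTrack are the two NNet outputs, each -1..+1 */
void DriveCreature(float leftTrack, float rightTrack,
                   float *x, float *y, float *heading)
{
    float speed = (leftTrack + rightTrack) * 0.5f * MAX_SPEED; /* the sum        */
    float turn  = (leftTrack - rightTrack) * MAX_TURN_RATE;    /* the difference */

    *heading += turn;
    *x += cosf(*heading) * speed;
    *y += sinf(*heading) * speed;
}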

Now the important part.
We don't care whether the creature is turning toward its prey, or moving away from it.
If a creature is displaying 'bad behavior' with respect to its primary directive, the genetic algorithm (our version of Natural Selection) will make sure that its 'genes' don't survive, it will be replaced.

Next Post: Example of PseudoNatural Selection - Genetic Algorithm Exposed.
Posted on 2009-11-26 00:12:40 by Homer
Anyone actually interested in this thread? Not many hits.. Perhaps it's too advanced for most people? Requests?

Posted on 2009-11-28 05:07:57 by Homer
I like my bots dumb and manually programmed :) . I have a friend who is studying AI as a major; he gave me the impression it's too hard to find optimal configurations for the hidden layers and to teach the net.
Posted on 2009-11-28 05:46:01 by Ultrano

Way over my head :/ Very interesting though.
Posted on 2009-11-28 09:04:32 by JimmyClif
If you're trying to 'manually' train a single neural network to do a specific job, it's very hard to find optimal weights.

And that is exactly why genetic training mechanisms are awesome!! And why this thread should evoke more interest: this is NOT about rule-based AI schemes or back-propagation-trained neural networks, it's about converging on optimal solutions using a little randomness and a lot of processing ;)
Posted on 2009-11-28 09:19:09 by Homer
The implementation seems too simple to evoke any (subjectively) interesting behaviors.
The best predators with these inputs will always turn quickly to the prey's direction and move forward.

However, if you were to also add the closest prey's facing direction as an input; now the predators would be able to learn how to intercept as opposed to simply chase.

Perhaps different nets trained with more inputs could translate into the ~IQ of your AI. This would allow your engine to have levels of difficulty.

I would be interested in the memory requirements, how they scale with more inputs/outputs, as well as the processing requirements (post-training) during the simulation. Is it a CPU-intensive task to run inputs through a trained net?
Posted on 2009-11-28 22:17:44 by r22
I'm very interested. NNs are a pet project of mine, and so far you've been easy to understand.
Posted on 2009-11-30 06:58:02 by Sparafusile
This type of "microevolution" approach for molding AI behavior is indeed interesting.
Posted on 2009-11-30 12:39:52 by SpooK
Well, it's time to get to the nuts and bolts of this thing.
How do we design a self-improving and self-learning system?

Most neural network classes keep all the "input weights" for all neurons in one array; I'm going to assume this.

So - we have, say, 50 critters, each has its own neural network, so we have 50 arrays of weights to mess with.
The first step of our genetic algorithm is to introduce something called "elitism".
We unleashed our critters with their brains full of randomness (the input weights are random); some of our critters were lucky enough to collide with some food, and we increased their Fitness counters.
So some of our critters are more fit than others.
Let's take the X most fit critters, and make Y copies of their neural weights, overwriting some of the really UNFIT ones.
So for example, let's take the two fittest creatures, and make 6 copies of each of their genes (extreme example).
We now have 12 new creatures whose neural weights were directly copied from the best examples we had.
And we have trashed 12 really crappy creatures, leaving 38 more creatures in our original pool.

The second step of our genetic algorithm introduces something called "crossover".
We use a fitness based heuristic to choose a fairly fit pair of critters, and we pick a random value that is in the range from zero to #weights.
Now we CUT their weight arrays at that point, and swap their weights from that point onwards.
So we now have two creatures whose brains got mixed together at some random point.

The final part of our genetic algorithm is called "Mutation", and we apply it to the pair of creatures we just crossed.
It's a gentle perturbation of the weights based again on randomness.

We continue crossover/mutation until we have no creatures left, or just one left.
If there's just one left, we can't cross it, but we CAN mutate it.

So now we have 50 output creatures, same as the number we started with, except we have messed with their brains, leaning toward the most successful members.
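
Putting the whole generation step together, here's a sketch of the algorithm just described. The mutation rate and size are illustrative tuning values, and pairing neighbours in the fitness-sorted array stands in for a fancier fitness-based selection heuristic:

#include <stdlib.h>

#define POP_SIZE 50
#define N_ELITE  2       /* fittest creatures to clone          */
#define N_COPIES 6       /* copies made of each elite           */
#define MUT_RATE 0.10f   /* chance of perturbing any one weight */
#define MUT_SIZE 0.30f   /* maximum size of one perturbation    */

typedef struct {         /* same Genome as sketched earlier */
    float *weights;
    int    nWeights;
    int    fitness;
} Genome;

static float frand(void) { return (float)rand() / (float)RAND_MAX; }

/* Gently perturb some weights at random. */
static void Mutate(Genome *g)
{
    for (int w = 0; w < g->nWeights; w++)
        if (frand() < MUT_RATE)
            g->weights[w] += (frand() * 2.0f - 1.0f) * MUT_SIZE;
}

/* Cut both weight arrays at a random point and swap the tails. */
static void Crossover(Genome *a, Genome *b)
{
    int cut = rand() % a->nWeights;
    for (int w = cut; w < a->nWeights; w++) {
        float t = a->weights[w];
        a->weights[w] = b->weights[w];
        b->weights[w] = t;
    }
}

/* pop[] must be sorted fittest-first before calling. */
void Epoch(Genome pop[POP_SIZE])
{
    /* 1. Elitism: overwrite the 12 least fit with copies of the 2 best. */
    int victim = POP_SIZE - 1;
    for (int e = 0; e < N_ELITE; e++)
        for (int c = 0; c < N_COPIES; c++, victim--)
            for (int w = 0; w < pop[e].nWeights; w++)
                pop[victim].weights[w] = pop[e].weights[w];

    /* 2+3. Crossover and mutation over the remaining 38, in pairs. */
    int remaining = POP_SIZE - N_ELITE * N_COPIES;
    int i;
    for (i = 0; i + 1 < remaining; i += 2) {
        Crossover(&pop[i], &pop[i + 1]);
        Mutate(&pop[i]);
        Mutate(&pop[i + 1]);
    }
    if (i < remaining)       /* odd one out: can't cross it, */
        Mutate(&pop[i]);     /* but we CAN mutate it         */

    for (i = 0; i < POP_SIZE; i++)
        pop[i].fitness = 0;  /* fresh start for the next generation */
}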

Now we can run them for another generation (let them wander around a bit more), and repeat this process.
Each time we do, we will notice that the behavior of the creatures becomes generally more successful - they get better at what they do, and thanks to the chaos factor (randomness), they can improve further.
Of course, some will do worse rather than better, but our genetic algorithm only favors success.

Some thousands of generations later, we have creatures that actually turn and aim for their food, as if they know it's a good thing, steering from one food item to the next as opposed to sitting in one place and turning in circles!!

It sounds too simple to work, but it DOES work!
Posted on 2009-12-06 22:45:15 by Homer
I suppose my biggest question is... how will you be able to control the AI in such a way that it does what it's supposed to do in the game in a reasonably convincing way?

Eg, you want your AI bots to attack the player, and you want them to behave in a reasonably convincing way, so they should duck for cover when the player shoots at them, and show a bit of teamwork/tactics as well.

If you train them 'on the fly', won't that mean that they depend on interaction with the player to evolve into decent opponents, so the first few rounds of playing means the player is playing against a bunch of 'idiot' bots, who don't present much of a challenge? How long would it take before they evolve into a realistic opponent? Is there a guarantee that they will evolve that way at all?
Posted on 2009-12-07 03:58:17 by Scali
The idea is to NOT control the AI - let it control itself, let it learn on its own.
The more complex its NN (more inputs and outputs), the more complex behaviors it can evolve.
However, it can take a long time to become acceptable in our eyes, so there are some tricks we can use to accelerate and guide the evolutionary process. And the most important one is to use the player to teach the AI.

You can have a passive AI associated with the player that is being trained through conventional back-propagation - when the player is in a given situation, we can encode that situation into our inputs. And when the human player makes his or her next move, we can view that as a set of desirable outputs - now we have enough information to train our AI using neural network backprop.
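
A sketch of that observation-driven training step, for one hidden layer with tanh neurons - the sizes, names and learning rate ETA are all illustrative:

#include <math.h>

#define N_IN  4
#define N_HID 6
#define N_OUT 2
#define ETA   0.1f   /* learning rate */

/* weights[to][from]; the last column of each row is the bias */
float w1[N_HID][N_IN  + 1];   /* input  -> hidden */
float w2[N_OUT][N_HID + 1];   /* hidden -> output */

/* One training step: 'in' encodes the Situation, 'target' is what
   the Player actually did in that situation.                      */
void TrainStep(const float in[N_IN], const float target[N_OUT])
{
    float hid[N_HID], out[N_OUT];

    /* forward pass */
    for (int j = 0; j < N_HID; j++) {
        float s = w1[j][N_IN];                     /* bias */
        for (int i = 0; i < N_IN; i++) s += w1[j][i] * in[i];
        hid[j] = tanhf(s);
    }
    for (int k = 0; k < N_OUT; k++) {
        float s = w2[k][N_HID];                    /* bias */
        for (int j = 0; j < N_HID; j++) s += w2[k][j] * hid[j];
        out[k] = tanhf(s);
    }

    /* backward pass: error deltas (tanh' = 1 - tanh^2) */
    float dOut[N_OUT], dHid[N_HID];
    for (int k = 0; k < N_OUT; k++)
        dOut[k] = (target[k] - out[k]) * (1.0f - out[k] * out[k]);
    for (int j = 0; j < N_HID; j++) {
        float s = 0.0f;
        for (int k = 0; k < N_OUT; k++) s += dOut[k] * w2[k][j];
        dHid[j] = s * (1.0f - hid[j] * hid[j]);
    }

    /* weight updates */
    for (int k = 0; k < N_OUT; k++) {
        for (int j = 0; j < N_HID; j++) w2[k][j] += ETA * dOut[k] * hid[j];
        w2[k][N_HID] += ETA * dOut[k];
    }
    for (int j = 0; j < N_HID; j++) {
        for (int i = 0; i < N_IN; i++) w1[j][i] += ETA * dHid[j] * in[i];
        w1[j][N_IN] += ETA * dHid[j];
    }
}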

Now just imagine that the NNet associated with our player belongs to one or more AI opponents which are wandering around in that same game at the same time - they'll begin to mimic the behavior of the human player, which can result in AI that seems VERY clever; they appear to learn how to counter your attacks etc. by watching what YOU do.

As a simple example, imagine a game of tic tac toe against an AI opponent.
Given the state of the game, the player makes a move. The AI learns to make that move, in that situation. Next time you play against that AI, it will counter you with the tactics it has learned.

Posted on 2009-12-07 05:44:42 by Homer
Well, I think you can't really escape control in a way, can you?
I mean, if you want to release a game, you probably want to have 'pre-trained' AI, right? Else the player can't really enjoy the game in its full glory until he's played long enough to have trained the AI to a decent level of sophistication.
So I suppose the developer needs to pre-train the AI and store that 'default' training knowledge in the game upon first installation.
The AI can then evolve further from there, through player input... but I don't think it will work if new players go up against untrained AI units.
Posted on 2009-12-07 09:12:19 by Scali

As with most things, you probably want to aim for the middle.

Take some of the most basic traits that were evolved during development, mix in some other static/programmed responses, and use them together as a base to be grown upon during game play.

This way, you have roughly decent opponents/AI out of the box, and then they can adapt to each player's unique style, providing a better and more challenging experience.

On top of that, you can build in a feedback mechanism that can be optionally enabled by players. Desired information could be fed to a database, analyzed, synthesized and then applied in a game patch.

A more interesting factor in all of this, for me, is in adding random events to games, big and small, that require adaptation.

Imagine you're battling it out with a bunch of creatures and then another enemy, both new and unknown to you and your immediate enemy, decides to mount an assault. How does your immediate enemy react? How will they adapt to dealing with both you and the new enemy? How do you ensure that the reaction of your immediate enemy doesn't seem too contrived?
Posted on 2009-12-07 10:24:29 by SpooK
I've said that we can include more environmental inputs, and drive more outputs.
What else might we want to drive?

And we can chain together several 'expert system' networks, driving a final 'output network'.

But our AI quickly become 'too clever', and so we need to devise ways to dumb them down, while still encouraging growth in the positive direction...
For example, we can limit the creature's range of vision.
We'll simply ignore prey which are outside the creature's range of sight.
That will teach it to move toward prey, instead of backing over them.
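
A sketch of one such vision limit, as a simple dot-product field-of-view test (Vec2 as in the earlier input-packing sketch; the zero threshold, i.e. a 180-degree field of view, is an arbitrary choice):

/* Both arguments must be normalized direction vectors. */
int CanSee(Vec2 toPrey, Vec2 look)
{
    float dot = toPrey.x * look.x + toPrey.y * look.y;
    return dot > 0.0f;   /* positive = the prey is in front of us */
}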

Now we can look at a more complex example.

Let's say we have two Input Networks... the first one maps a Situation, and the second one is trained by the Player's actions in response to a given Situation... i.e., given A = a set of environmental inputs, and B = the Player's Response to situation A, produce a set of outputs which should approximate B.

And we have them both feeding into the inputs of a third Output network, producing one set of outputs.

At first this might not seem productive.
What we now have is two half-brains.
One of them becomes an expert at recognizing patterns of environmental inputs, as in our simpler example.
The other one becomes an expert at remembering the Player's reactions to given environmental situations.
And the two of them both bear upon the outputs, which tend toward B.
I.e., when we Train this tripod network, we tell the first input network that B is the desired output for input A.
And we tell the second input network the same thing: that B is the desired output for input A...

Both of them are trained to remember "for environmental input A, the output should be B, which is what the Player did in this situation."

By separating these two concepts into two separate (input) networks, we can train them separately, and when we apply our genetic algorithm, we will be only mincing a given logical function, not the entire brain :)
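
A sketch of wiring that up, reusing the NNet struct and NNetRun from the earlier feed-forward sketch (the expert output sizes are illustrative):

#define NA_OUT 4   /* outputs of the Situation expert        */
#define NB_OUT 4   /* outputs of the Player-mimic expert     */

void TripodRun(const NNet *netA, const NNet *netB, const NNet *netOut,
               const float *inA, const float *inB,
               float *bufA, float *bufB, float *outputs)
{
    float mid[NA_OUT + NB_OUT];   /* combined input of the output net */

    NNetRun(netA, inA, bufA, bufB, mid);            /* expert A   */
    NNetRun(netB, inB, bufA, bufB, mid + NA_OUT);   /* expert B   */
    NNetRun(netOut, mid, bufA, bufB, outputs);      /* 'mix down' */
}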

Posted on 2009-12-11 08:21:37 by Homer

The important part about that last post...

For a single neuron we have one common output, and several separate inputs which may be unrelated.
For a Neural Network (NN), we have one set of outputs and one set of inputs.

When we build a network of interconnected expert systems (a network of NN's), we follow similar logic.
There always has to be a single output NN, so that whatever goes on inside the brain gets 'mixed down' into the output, like an audio mixer.
But the inputs can be highly unrelated - for a neuron, for a NN, and for a NN network.
And so how you structure your network of expert NN's can greatly influence how well it learns multiple tasks, especially with regard to the traditional balance problem: when you train a single NN with a new input, its existing 'memory' is damaged as the new 'memory' is reinforced. By partitioning the brain (biology, anyone?) we can:
A> create a hierarchical system of expert agents (sounds like a nice OS, doesn't it)
B> greatly reduce the collateral damage caused to the system's 'memory' when training the system with specific inputs (thus improving its ability to accurately respond to a wider range of inputs without increasing the resolution of neuron weights and without having to 'reinforce')
C> Produce a richer range of behaviors through a more dynamic range of expressions feeding into our outputs.


Posted on 2009-12-14 00:15:15 by Homer
I guess until someone else plays around with this stuff, I'm willing to consider this thread closed.
But I will leave it unlocked, on the premise that someone wants to ask a question, or point out some error.
Posted on 2010-02-11 04:37:53 by Homer
I was just scrolling through the list of threads here, and I hadn't noticed this one before. Not trying to revive an old thread, but there doesn't seem to be a lot of posts on this forum in general anyway.

Neural networks greatly interest me. The main guide I've seen is at the "ai junkie" which seems to come up first when "neural networks tutorial" is googled anyway. He has an interesting example of squares finding a path to food. It's very interesting to play with his example and tweak inputs/outputs, node numbers, etc.

My main question is.. how do you get a decent start in this stuff? Don't get me wrong, I've already played with code and looked at several guides and such.. I suppose it's a deeper mathematical understanding of exactly what's going on with a neural network that I lack. It seems very much like a "black box" - a matrix of interconnected numbers that statistically increases the odds via a function of some sort (depending on the type of network).

I've always thought it would be cool to get an AI with a kind of semi-language using symbols, similar to the way a translation parser works with symbols. Completely dependent on its known inputs/outputs, it might learn to associate the symbols with language and develop a very, very basic language of its own. The closest I've found to this is a book on Google Books called "Subsymbolic natural language processing", which uses a lexicon neural network approach. It's from the 90's, but I managed to get an example from it that is still online to compile anyway.

I guess in part it depends on the goal of a project and knowing how to apply neural networks to that particular goal.

Any thoughts? How did you get your start? :)
Posted on 2010-05-11 11:00:06 by Brainiac