I have got a neural network with 100 inputs and 6 outputs. How can I know how many hidden layers I need? My activation function is sigmoidal.

And I'm using backpropagation to train it.
Posted on 2005-02-01 14:30:15 by AceEmbler
Trial and error :)

I've never seen a problem that needed more than three hidden layers; in fact, hardly any I've come across needed more than one hidden layer.

The answer depends on the relationship between the inputs and outputs: if it's purely linear then no hidden layers are needed. Most "interesting" problems won't fall into this category, and in general one hidden layer will suffice.
Posted on 2005-02-01 15:16:13 by Eóin
Sorry, I wanted to ask how many neurons in the hidden layer I need. I have got one hidden layer :-D
Posted on 2005-02-01 16:57:46 by AceEmbler
Ah yes, OK. Still though, it can be interesting to experiment with two hidden layers.

As for how many neurons, a rule of thumb is to use the same amount as either the input layer or the output layer, whichever is bigger.
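
As a trivial sketch of that heuristic (Python, and the function name is just mine, not anything standard):

```python
def rule_of_thumb_hidden(n_inputs, n_outputs):
    # Heuristic from this thread: match the larger of the two visible layers.
    return max(n_inputs, n_outputs)

print(rule_of_thumb_hidden(100, 6))  # 100 for the net in this thread
```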

But let me explain a bit of what seems to go on, so you'll be better able to judge for yourself:

A neural network maps inputs to outputs. When that mapping is linear, as I mentioned above, you don't need any hidden layer neurons at all :) . A linear mapping is a simple one where, say, if input A gives output B, then doubling input A results in double the output B. More specifically, in math notation a function f(X) is linear if f(aX + bY) = af(X) + bf(Y).
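
To make that concrete, here's a quick sketch (Python + NumPy, purely for illustration): a net with no hidden layer and no nonlinearity is just one weight matrix, and that mapping satisfies the linearity property above.

```python
import numpy as np

rng = np.random.default_rng(0)

# A net with no hidden layer (and no activation) is just a weight matrix:
# 100 inputs -> 6 outputs, as in this thread.
W = rng.normal(size=(6, 100))

def f(X):
    return W @ X  # the whole "network" is one matrix-vector product

X = rng.normal(size=100)
Y = rng.normal(size=100)
a, b = 2.0, -3.0

# Linearity check: f(aX + bY) == a*f(X) + b*f(Y) (up to float rounding).
print(np.allclose(f(a * X + b * Y), a * f(X) + b * f(Y)))  # True
```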

But you don't need to worry about the notation. The important thing about linear mappings is that they're actually boringly simple, and using a neural network for them is total overkill :) . So when you move away from linear mappings you need hidden layers. Now my theory on what happens is that in a best-case scenario, if you have 20 hidden neurons then you can learn 20 completely unrelated nonlinear mappings. In practice this number can be reduced by the initial random weights given to the network (two hidden neurons could end up learning the same thing), but also in practice the mappings are not usually entirely unrelated, and so the net could learn many more.

Think of the hidden layer as the network's memory. The great thing about NNs is that even if the net hasn't learned something, it tries to use what it has learned to guess at the answer. To know how much memory it'll need you have to look at your input data: if small changes in it result in large output changes, then you'll probably need a lot of hidden neurons.

Am I making any sense? :)
Posted on 2005-02-02 09:28:43 by Eóin
Thx for the info, I have already finished this one. I had 20 hidden neurons and it was working OK (monochrome 10x10 image recognition), but I will try 100 hidden neurons.
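
For reference, here's roughly what such a net looks like as a quick sketch (Python + NumPy, purely for illustration; the learning rate, epoch count, and random training data are made-up placeholders, not values from this thread):

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Shapes from this thread: 100 inputs (10x10 monochrome pixels),
# 20 hidden sigmoid neurons, 6 outputs.
n_in, n_hid, n_out = 100, 20, 6
W1 = rng.normal(scale=0.1, size=(n_hid, n_in)); b1 = np.zeros(n_hid)
W2 = rng.normal(scale=0.1, size=(n_out, n_hid)); b2 = np.zeros(n_out)

# Placeholder data: 60 random binary "images" with random target classes.
X = rng.integers(0, 2, size=(60, n_in)).astype(float)
T = np.eye(n_out)[rng.integers(0, n_out, size=60)]

lr = 0.5  # made-up learning rate
for _ in range(1000):
    for x, t in zip(X, T):
        # Forward pass.
        h = sigmoid(W1 @ x + b1)
        y = sigmoid(W2 @ h + b2)
        # Backward pass (squared error; sigmoid derivative is y*(1-y)).
        d_out = (y - t) * y * (1.0 - y)
        d_hid = (W2.T @ d_out) * h * (1.0 - h)
        # Gradient-descent weight updates.
        W2 -= lr * np.outer(d_out, h); b2 -= lr * d_out
        W1 -= lr * np.outer(d_hid, x); b1 -= lr * d_hid

H = sigmoid(W1 @ X.T + b1[:, None])   # hidden activations, all samples
Y = sigmoid(W2 @ H + b2[:, None]).T   # network outputs, shape (60, 6)
print("final mean squared error:", np.mean((Y - T) ** 2))
```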
Posted on 2005-02-02 09:51:00 by AceEmbler