Hi,

Where can I get some basic (and not so basic) information about neural networks (how they are organized, what they are for, ...) and sound processing (streaming WAV, various processing effects (vibrato, ...))?

I would be grateful for any information.

I played with neural networks about 3 years ago... but back then I didn't have as much skill in these areas; nowadays university keeps me playing with robots instead.

So what I can tell you is a bit lean, and probably a bit outdated. I seem to remember BogdanOntanu saying he had a lot of experience with this stuff a couple of months back; you might want to ask him as well.

All that I remember is:

Neural nets have multiple inputs and typically one output (though they're not limited to one). They consist of a network of nodes, grouped like a matrix, where each "column" is a "layer" of nodes.

Each node is a summing junction.

Each layer of nodes has a path to every node in the next layer (it starts to look like a string puzzle). Every path has a decimal number between -1 and +1. The more precision the better!

This is better shown with a picture:

Every "path" gets a constant multiplier, such that the value from the node it begins on gets multiplied by that decimal and summed into the next layer's node. Once each layer has been passed, the final output is your "logic" based on the input conditions.

Long story short: the magic is in those multipliers AND the number of NODES and LAYERS used. Choosing random numbers for the multipliers will not work!! They need to be "trained". This means you apply the input conditions, see the output result, deduce the error, and then apply a training rule that will "try" to readjust the numbers between the layers (each node also squashes its sum through a "Sigmoid", a nonlinear S-shaped function, which is what makes the net more than one big linear equation). Then the process is repeated, hundreds of times, until the numbers have the right values and precision to produce the output desired.
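
The loop just described (apply inputs, measure the error, nudge the multipliers, repeat) can be sketched in a few lines. This is a minimal illustration in Python rather than asm; the 2-2-1 net shape, the starting weights, and the learning rate are all made up for the example, and the update rule is plain gradient descent (backpropagation) on a single input/output pair:

```python
import math

def sigmoid(x):
    # the nonlinear "squashing" function applied at each summing junction
    return 1.0 / (1.0 + math.exp(-x))

# a tiny 2-input, 2-hidden-node, 1-output net with hand-picked starting weights
w1 = [[0.3, -0.4], [0.2, 0.5]]   # w1[i][j]: path from input i to hidden node j
w2 = [0.1, -0.3]                 # w2[j]:    path from hidden node j to the output
lr = 0.5                         # learning rate: how hard each correction nudges
inputs, target = [1.0, 0.0], 1.0 # one input condition and its desired output

for step in range(5000):
    # forward pass: sum into each node, squash, repeat for the next layer
    h = [sigmoid(sum(inputs[i] * w1[i][j] for i in range(2))) for j in range(2)]
    out = sigmoid(sum(h[j] * w2[j] for j in range(2)))
    # backward pass: deduce the error and readjust the multipliers
    d_out = (out - target) * out * (1 - out)
    for j in range(2):
        d_h = d_out * w2[j] * h[j] * (1 - h[j])
        w2[j] -= lr * d_out * h[j]
        for i in range(2):
            w1[i][j] -= lr * d_h * inputs[i]

print(out)   # after training, close to the target of 1.0
```

With only one training pair this always converges; the hard part (shown further down the thread) is making one set of multipliers satisfy many input conditions at once.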

The downside is that it takes a lot of work and time training the multipliers to do a task.

The upside is that once you figure it out and training is done, you can save the numbers and where they belong, and build a hundred copies, fully trained, in a second!

I'm told this is how "smart" video cameras "know" how to adjust the digital image when you get a bump or a jostle in your recording. They spend thousands of hours shaking a prototype camera and retraining its matrix until it works. Once the R&D is done, they hard-code the numbers in silicon and mass-produce the camera.

Hope this helps. I can't give you any references, but there is enough here to get you going in the right direction with Google searches.

NaN

Thanks for the description.

I'm having trouble imagining what's meant by a "Neural Net". Does anyone have some clearer info somewhere? Some links? :confused:

Hiro, its design was modelled after the human brain's interconnections. Somehow, through billions of nodes and interconnects, decisions are made and learning is performed.

Some mathematical wizard came up with the above model to represent this (at a less complex scale).

What the network "learns" comes from iterative experience, plus a feedback routine to "adjust" the constant multipliers between nodes.

The catch is that adjusting some numbers to make one output correct for a given set of inputs may totally throw off the output for another set of conditions. This is because the values will traverse every non-zero path regardless of the inputs (i.e., there are no .if/.endif decisions here, only sums of multiples of the input values).

A fully trained network is an equilibrium of precise and purposeful decimal places in each multiplier such that, for example, if I apply A=1, B=0, C=1, the output is 1. This would be easy to do if this were the only input condition to be concerned about.

In theory you could apply an infinite number of different input combinations (fractional inputs), but for simplicity I will assume A, B, C are BINARY only, and thus there are 7 more conditions to "train" for.

Since there are 7 other possible input states, the multipliers must be adjusted to produce all the proper results when tested against all the input conditions. NOW this is complex!

I will "try" another example... :)

That is definitely heavy stuff.

I did some stuff in debug that was recursive.

Like, my program changed itself at runtime in order to get a desired effect.

It worked well.

Then I read that a program that changes its code segment at runtime is bad policy.

After reading this thread I'm not so sure.

just my two pence

Wonderful, NaN!

Have you written a neural network or an intelligent program in asm?

Consider the output equation:

let E() be a summing junction: E(A, B) = A + B

Output = E( c5 * E( c3* B, c1 * A), c6 * E( c2 * A, c4 * B) )

Here you have a parametric equation!

Mathematics will tell you that if you have **n** equations and **m** unknowns to solve for, then:

if m = n --> the system is solvable, with one unique solution

if m > n --> the system has an infinite number of possible solutions

if m < n --> the system may be unsolvable, or there exists only one unique solution

From this example we have 4 equations: the above Output equation, four times over, with the inputs A and B and the Output following the XOR truth table!

But we still have the 6 unknowns (c1, c2, c3, c4, c5, c6)!! This means there are MORE unknowns than equations! Thus, by the rule above, an infinite number of solutions should exist to make this net act as an XOR when BINARY values are placed on the inputs. Even where that is true, it doesn't mean finding one of those combinations is easy. There are just as many that WON'T work. (And entropy is NOT on our side :) )

Furthermore, the equations cannot be solved by finite methods (i.e., matrix Ax=b stuff). Here are the equations expanded:

A=1, B=0: Out = 1 = c1*c5 + c2*c6

A=0, B=1: Out = 1 = c3*c5 + c4*c6

A=0, B=0: Out = 0 = 0

A=1, B=1: Out = 0 = c1*c5 + c2*c6 + c3*c5 + c4*c6
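
You can check those expanded equations by evaluating the output formula directly, and the check also exposes the catch: with plain summing junctions the output is linear in A and B, so Out(1,1) always equals Out(1,0) + Out(0,1), no matter what constants you choose. A quick sketch in Python (the constants are arbitrary picks of mine):

```python
# Output = E(c5*E(c3*B, c1*A), c6*E(c2*A, c4*B)) with E(a, b) = a + b
def output(A, B, c1, c2, c3, c4, c5, c6):
    return c5 * (c3 * B + c1 * A) + c6 * (c2 * A + c4 * B)

c = (0.7, -0.2, 0.5, 0.9, 0.3, -0.6)   # any constants at all

o10 = output(1, 0, *c)   # expands to c1*c5 + c2*c6
o01 = output(0, 1, *c)   # expands to c3*c5 + c4*c6
o11 = output(1, 1, *c)   # expands to the sum of the two lines above

# linearity: Out(1,1) = Out(1,0) + Out(0,1) for every choice of constants,
# so the targets 1, 1 and 0 can never all be satisfied at once
print(abs(o11 - (o10 + o01)) < 1e-9)
```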

The constants are found by iteratively applying a mathematical training formula that adjusts them based on the error, such that the new number NEVER exceeds +1 or -1. Keeping the numbers within fractions keeps stability in the network!! (Anyone who knows discrete systems will see this as an analog of the unit circle in feedback control loops.) Stability meaning that the network can be trained to converge to the desired outputs from the inputs; multipliers over 1 would cause the value to grow out of control as it traverses from layer to layer.

Try and choose numbers for c1-c6 to make it work as an XOR... it's TOUGH! (Even though you do have the 3rd eqn done for you; any values will satisfy it!) In fact, with pure summing junctions it is impossible: add the first two expanded equations and you get Out(1,1) = 2, not the 0 that XOR demands. This is the classic XOR result, and it is exactly why real networks pass each node's sum through a nonlinear function such as the sigmoid.
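
To see the flip side, here is a sketch of the same XOR problem with a sigmoid added at each summing junction, trained by backpropagation. The network shape (2 inputs, 3 hidden nodes, 1 output), the learning rate, and the retry loop are all my own choices, not from the description above; the retries are there because backprop can stall from an unlucky starting point:

```python
import math, random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
H = 3   # hidden nodes; a spare one makes training much more reliable

def train(seed, epochs=20000, lr=1.0):
    rng = random.Random(seed)
    w1 = [[rng.uniform(-1, 1) for _ in range(H)] for _ in range(2)]
    b1 = [rng.uniform(-1, 1) for _ in range(H)]
    w2 = [rng.uniform(-1, 1) for _ in range(H)]
    b2 = rng.uniform(-1, 1)

    def forward(a, b):
        h = [sigmoid(a * w1[0][j] + b * w1[1][j] + b1[j]) for j in range(H)]
        return h, sigmoid(sum(h[j] * w2[j] for j in range(H)) + b2)

    for _ in range(epochs):
        for (a, b), t in XOR:
            h, o = forward(a, b)
            d_o = (o - t) * o * (1 - o)       # deduce the error at the output
            for j in range(H):
                d_h = d_o * w2[j] * h[j] * (1 - h[j])
                w2[j] -= lr * d_o * h[j]      # readjust the multipliers
                w1[0][j] -= lr * d_h * a
                w1[1][j] -= lr * d_h * b
                b1[j] -= lr * d_h
            b2 -= lr * d_o
    return lambda a, b: forward(a, b)[1]

# retry from a few random starting points in case one stalls
for seed in range(10):
    net = train(seed)
    if all(round(net(a, b)) == t for (a, b), t in XOR):
        break

print([round(net(a, b)) for (a, b), _ in XOR])
```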

**I stop here, because I don't have any "training" equations off hand to lecture about.** PS: Hiro, check out this link. I don't know if there's anything good, but he's a prof at my university who is internationally known for his Neural Net research ((I've never had any of his classes, though)) --> Here

NaN

Oh ya... getting back to my first statement!

This makes you wonder about the chicken and the egg story!!

Which came first??

For US to have a brain and use it, we apply inputs (eyes, touch, etc.), and they progress through a FRIGGING MASSIVE NETWORK about 10-20 times a second, giving us our responses to these inputs.

But this also means there is another NETWORK training our brain's network when we learn something new!!! A neural, logical Just-In-Time compiler that will globally reprogram our heads whether we CHOOSE to or not.

So what comes first our programmer, or our program?

Also, I attribute deja vu (if I spelled that right :) ) to one of the infinite input combinations accidentally triggering a response that was the by-product of other programming.

I'll explain: if you solve the XOR problem and it works, great. Now place 1/2 and 0 on the inputs and see the output. This is the "by-product" of the LEARNED conditions. You haven't taught it how to respond to 1/2 and 0, but it still has an answer!
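
You can see this by-product with the toy linear formula from the XOR discussion (constants again arbitrary picks of mine, not trained for anything in particular): the net was never shown A=1/2, B=0, yet it still hands back a definite answer.

```python
# the linear output formula from earlier in the thread
def output(A, B, c1, c2, c3, c4, c5, c6):
    return c5 * (c3 * B + c1 * A) + c6 * (c2 * A + c4 * B)

c = (0.7, -0.2, 0.5, 0.9, 0.3, -0.6)   # constants "trained" for something else

# an input condition the net was never trained on still yields a response
answer = output(0.5, 0, *c)
print(answer)
```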

So as I see it, when I have deja vu, my programmer says "this is new!", but at the same time my program says "Ya! I've got an idea of what this is about..."

........ I think.

NaN

Doing self-modifying code isn't all that bad, if you have a really good reason. And if you only change the code seldom. And if you make the code thread-safe :). Most of the time you can design better and avoid SMC. In other cases, SMC can give major performance boosts (crypto algos that are tailored to a specific key, etc.).
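
A tamer relative of SMC is generating a specialized routine at run time instead of patching one in place. Here is a toy sketch in Python (not asm, and definitely not real crypto): the key bytes get baked into generated source as literal constants, in the spirit of the key-tailored crypto example, and the specialized routine behaves exactly like the generic one.

```python
# Generate a routine specialized for one fixed key at run time.
# Toy XOR "cipher" only -- it illustrates the idea, not real crypto.
def make_xor_cipher(key: bytes):
    lines = ["def cipher(data):", "    out = bytearray(data)"]
    for i, k in enumerate(key):
        # unrolled per key byte: each byte becomes a literal constant
        lines.append(f"    for j in range({i}, len(out), {len(key)}):")
        lines.append(f"        out[j] ^= {k}")
    lines.append("    return bytes(out)")
    ns = {}
    exec("\n".join(lines), ns)   # "assemble" the specialized routine
    return ns["cipher"]

# the generic, un-specialized version for comparison
def generic(data, key):
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = b"\x13\x37"
cipher = make_xor_cipher(key)
msg = b"hello world"
assert cipher(msg) == generic(msg, key)   # same result as the generic loop
assert cipher(cipher(msg)) == msg         # XOR twice restores the message
```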

OK, not to blow anyone's brain (mine is gone from this thread already)...

would this be a use for a neural net:

a robot that can detect and avoid obstacles?

a camera system that can recognize a fighter jet from seeing only parts of it and/or the whole thing?

seems like you would have to train it to see what each one is... any chance of a self-learning robot? fire=bad, food=good

hafta excuse me, it's 3:35am

l8a

Yup! :)

Well, in theory... with proper sensing devices. But yes. Also, Fuzzy Logic is its "less complex" brother; it can be just as powerful in parametric situations.

Here is a book I stumbled upon just today (while doing my robotics research)... http://www.acroname.com/robotics/parts/R74-BOOK-9.html

Also check out a post in the Algorithms section; there is a thread there with some really good web links on NNs.

NaN
