#### A High Level Understanding of Backpropagation Learning

Backpropagation is a technique that helps a feedforward neural network learn, improving the accuracy of its output relative to the expected outputs of a training sample set. That is to say, we have a set of sample inputs with their expected outputs, and we use that set to make adjustments to the weights of our neuron associations (again, I like to call them axons). This approach effectively solves the problem of how to update the weights of hidden layers.

#### The Math Basics for Backpropagation Learning

http://www.youtube.com/watch?v=nz3NYD73H6E&p=3EA65335EAC29EE8

- Equation 1: *δ*ᵢ is the calculated change for neuron *n* within the *i*th layer. *r*ᵢ is the error for the *n*th node of the *i*th layer; *r* applies **only** to the output layer, where it is simply the difference between the expected output and the actual output (see Neuron.CalcDelta() in the example download). *v*ᵢ is the value calculated during the normal feedforward algorithm (see part 1), and *Φ*′ is explained in equation 4.
- Equation 2: **Critical:** this is much like equation 1, with one very distinct difference. You may notice that instead of using *r*, the error is calculated as the sum of the *j*th deltas weighted by the association axons connecting the *n*th neuron to the *j*th layer.
- Equation 3: this is the gradient, or calculated change of weights. In other words, this is where we make the learned changes. *η* is simply a learning rate between 0 and 1, and *y* is the original input value carried along the axon to the *n*th neuron during the feedforward calculation.
- Equation 4: this is the derivative of the activation function, for either the sigmoid or hyperbolic tangent function (depending on which activation function was used for the neurons).
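The original equation images are not reproduced here; based on the descriptions above and standard backpropagation notation, the four equations would read roughly as follows (the exact symbols and indices are my reconstruction, not the article's figures):

```latex
% Equation 1: output-layer delta, where r_n = expected output - actual output
\delta_n^i = \Phi'(v_n^i)\, r_n

% Equation 2: hidden-layer delta, summing the weighted downstream deltas
\delta_n^i = \Phi'(v_n^i) \sum_j w_{nj}\, \delta_j

% Equation 3: weight change (the gradient step), with learning rate eta
\Delta w_{nj} = \eta\, \delta_n^i\, y

% Equation 4: derivative of the activation function
\Phi'(v) = \Phi(v)\bigl(1 - \Phi(v)\bigr) \quad \text{(sigmoid)}, \qquad
\Phi'(v) = 1 - \tanh^2(v) \quad \text{(hyperbolic tangent)}
```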

*n* is referring to the *n*th node in a series of node calculations from input to output. Also, *i* and *j* simply note the distinction between one layer and another.
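As a rough sketch of how the four equations fit together (this is not the article's sample code; the network shape, weights, and names like `backprop_step` are my own, and biases are omitted for brevity), one feedforward-plus-backpropagation pass for a tiny 2-2-1 sigmoid network might look like:

```python
import math

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def sigmoid_prime(v):
    # equation 4, sigmoid form: phi'(v) = phi(v) * (1 - phi(v))
    s = sigmoid(v)
    return s * (1.0 - s)

def backprop_step(x, expected, w_hidden, w_out, eta=0.5):
    """One training pass for a 2-input, 2-hidden, 1-output network (no biases)."""
    # feedforward (part 1): v are the summed inputs, y the activated values
    v_hidden = [sum(w * xi for w, xi in zip(ws, x)) for ws in w_hidden]
    y_hidden = [sigmoid(v) for v in v_hidden]
    v_out = sum(w * y for w, y in zip(w_out, y_hidden))
    y_out = sigmoid(v_out)

    # equation 1: output-layer delta from the raw error r = expected - actual
    r = expected - y_out
    delta_out = sigmoid_prime(v_out) * r

    # equation 2: hidden deltas from the weighted sum of downstream deltas
    delta_hidden = [sigmoid_prime(v_hidden[n]) * (w_out[n] * delta_out)
                    for n in range(len(v_hidden))]

    # equation 3: weight change, delta_w = eta * delta * y
    new_w_out = [w + eta * delta_out * y for w, y in zip(w_out, y_hidden)]
    new_w_hidden = [[w + eta * d * xi for w, xi in zip(ws, x)]
                    for ws, d in zip(w_hidden, delta_hidden)]
    return new_w_hidden, new_w_out, y_out
```

Repeated calls with the same sample nudge the output toward the expected value; with a full training set, you would loop over every sample per pass.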

<< back to part 1

Download Sample