Resources
- https://www.linkedin.com/pulse/7-steps-becoming-deep-learning-expert-ankit-agarwal
- http://iamtrask.github.io/2015/07/12/basic-python-network/
- https://d396qusza40orc.cloudfront.net/neuralnets/lecture_slides/lec3.pdf
- http://hahack.com/reading/ann2/
- https://class.coursera.org/ml-005/lecture
- https://class.coursera.org/ntumltwo-001/lecture
Introduction
Neural networks are the foundation of deep learning. A neural network model comprises multiple layers of logistic regression models, as shown in the figure below.
In the figure above, $w_{ij}^{(l)}$ denotes the weight from neuron $i$ to neuron $j$ in layer $l$; the index $i$ refers to a source node and $j$ refers to a target node. $d^{(l)}$ denotes the number of neurons in layer $l$.
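As a concrete illustration of this notation, here is a minimal NumPy sketch; the layer sizes are my own example, not taken from the figure:

```python
import numpy as np

# Hypothetical layer sizes: d(0) = 3 inputs, d(1) = 4 hidden, d(2) = 1 output.
layer_sizes = [3, 4, 1]

# One weight matrix per layer l >= 1; entry [i, j] holds w_ij, the weight
# from source neuron i to target neuron j.  The extra first row holds the
# bias weights (for the bias unit z0 = 1).
rng = np.random.default_rng(0)
weights = [rng.standard_normal((layer_sizes[l - 1] + 1, layer_sizes[l]))
           for l in range(1, len(layer_sizes))]

for l, W in enumerate(weights, start=1):
    print(f"W({l}) shape: {W.shape}")  # (d(l-1) + 1, d(l))
```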
Forward Propagation
For a given input vector $\mathbf{x} = (x_0, x_1, \ldots, x_{d^{(0)}})$ (with $x_0 = 1$ as a bias), which is fed to the neural network at the input layer, the information is transformed and passed on layer by layer, from the input layer to the output layer. The output layer yields a single output value in the case of binary classification and regression (only one neuron in the output layer), and a vector of values in the case of multi-class classification (multiple neurons in the output layer).
For each neuron, the process of information transform comprises two steps, shown below:
- Generate the activation: $a_j^{(l)} = \sum_{i} w_{ij}^{(l)} z_i^{(l-1)}$, where $z_i^{(l-1)}$ is the output of neuron $i$ in layer $l-1$ (with $z_0^{(l-1)} = 1$ for the bias)
- Activate/transform, generally using one of the activation functions below:

Logistic sigmoid function: $\sigma(a) = \dfrac{1}{1 + e^{-a}}$

Hyperbolic tangent function (tanh): $\tanh(a) = \dfrac{e^{a} - e^{-a}}{e^{a} + e^{-a}}$

In the case of regression, the transform in the output layer uses the identity function $f(a) = a$.
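The two steps above can be put together into a minimal forward-pass sketch in NumPy (function and variable names are my own; hidden layers use the logistic sigmoid, and the output layer defaults to the identity, i.e. the regression case):

```python
import numpy as np

def sigmoid(a):
    """Logistic sigmoid: 1 / (1 + e^-a)."""
    return 1.0 / (1.0 + np.exp(-a))

def forward(x, weights, output_activation=lambda a: a):
    """Propagate input x through the network layer by layer.

    `weights` is a list of matrices; each matrix's first row holds the
    bias weights, matching the bias unit z0 = 1 prepended below.
    """
    z = x
    for W in weights[:-1]:
        z = np.concatenate(([1.0], z))   # prepend the bias unit z0 = 1
        a = z @ W                        # step 1: generate activations
        z = sigmoid(a)                   # step 2: activate/transform
    z = np.concatenate(([1.0], z))
    return output_activation(z @ weights[-1])

# Example: a 2-4-1 network with random weights (regression output).
rng = np.random.default_rng(0)
weights = [rng.standard_normal((3, 4)), rng.standard_normal((5, 1))]
print(forward(np.array([0.5, -1.2]), weights).shape)  # (1,)
```

For multi-class classification, the last layer would instead have one neuron per class and a non-identity `output_activation`.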
Back Propagation
Consider the error function at the output layer $L$ with $d^{(L)}$ nodes (in our example, $d^{(L)} = 1$), and assume there are $M$ training examples.
- $e = \dfrac{1}{2} \sum_{j=1}^{d^{(L)}} \left(z_j^{(L)} - y_j\right)^2$ (omitting the subscript $m$)
We can see that $e$ is a function of the outputs $z_1^{(L)}, \ldots, z_{d^{(L)}}^{(L)}$; each $z_j^{(L)}$ is a function of the activation $a_j^{(L)}$, and $a_j^{(L)}$ is a function of the weights $w_{ij}^{(L)}$. Thus we can apply the chain rule to calculate the partial derivatives:

$\dfrac{\partial e}{\partial w_{ij}^{(L)}} = \dfrac{\partial e}{\partial z_j^{(L)}} \cdot \dfrac{\partial z_j^{(L)}}{\partial a_j^{(L)}} \cdot \dfrac{\partial a_j^{(L)}}{\partial w_{ij}^{(L)}}$
We use $\delta_j^{(l)} = \dfrac{\partial e}{\partial a_j^{(l)}}$ to refer to the error term on neuron $j$ in layer $l$ (for the output layer, $l = L$).
Considering the error function from the output layer back to a hidden layer, we can find that $e$ is a function of the activations $a_k^{(l+1)}$ of the next layer, each $a_k^{(l+1)}$ is a function of $z_j^{(l)}$, and $z_j^{(l)}$ is a function of $a_j^{(l)}$, as shown in the figure below. Applying the chain rule again (for sigmoid hidden units, where $\frac{\partial z_j^{(l)}}{\partial a_j^{(l)}} = z_j^{(l)}(1 - z_j^{(l)})$) gives the recursion $\delta_j^{(l)} = \left(\sum_k w_{jk}^{(l+1)} \delta_k^{(l+1)}\right) z_j^{(l)} \left(1 - z_j^{(l)}\right)$.
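The chain-rule bookkeeping above can be sketched for a one-hidden-layer network with sigmoid hidden units, identity output, and squared error $e = \frac{1}{2}(\hat{y} - y)^2$; the function names and layer layout below are my own illustration, not from the post:

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def grads(x, y, W1, W2):
    """Backpropagation for one training example.

    W1, W2 are weight matrices whose first row holds the bias weights.
    Returns (dW1, dW2), the partial derivatives of e w.r.t. each weight.
    """
    # Forward pass
    xb = np.concatenate(([1.0], x))      # input with bias unit
    z1 = sigmoid(xb @ W1)                # hidden outputs
    z1b = np.concatenate(([1.0], z1))
    y_hat = z1b @ W2                     # identity output

    # Backward pass
    delta2 = y_hat - y                   # delta at the output layer
    # propagate back: drop the bias row of W2, apply sigmoid derivative
    delta1 = (W2[1:] @ delta2) * z1 * (1.0 - z1)

    dW2 = np.outer(z1b, delta2)          # de/dW2 via the chain rule
    dW1 = np.outer(xb, delta1)           # de/dW1 via the chain rule
    return dW1, dW2
```

A finite-difference check on a single weight (perturb it by a small $\epsilon$ and compare the change in $e$ against the computed gradient) is a quick way to validate the deltas.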