Deep Learning Theoretical Course (Course VII)
July 19, 2020
How does a neuron learn?
Before we get started on discussing how a neuron learns, let us first understand how a human learns to solve a simple mathematical problem.
Learning to solve a mathematical problem
Consider an equation in two variables, x and y. Find the values of x and y such that the equation holds correct, i.e., LHS = RHS.
A simple yet time-consuming way to solve this is to randomly guess values of x and y until the equation is satisfied. Although this approach works, it is pure random guessing and no learning is happening. This is similar to how, in the chapter 'What are neurons?', we randomly initialized the weights of the neuron and luckily received the prediction 'Obese'.
This trial-and-error process is also called Monte Carlo simulation.
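This random-guessing approach can be sketched in a few lines of Python. The equation 2x + 3y = 12 below is a made-up stand-in for the one being solved:

```python
import random

# Made-up example equation for illustration: 2*x + 3*y = 12.
def lhs(x, y):
    return 2 * x + 3 * y

RHS = 12

# Monte Carlo style search: keep guessing random values until LHS = RHS.
random.seed(0)  # fixed seed so the run is reproducible
for attempt in range(1, 100_001):
    x = random.randint(0, 12)
    y = random.randint(0, 12)
    if lhs(x, y) == RHS:
        print(f"x={x}, y={y} satisfy the equation after {attempt} guesses")
        break
```

Notice that no learning happens here: each guess completely ignores how close the previous guesses were.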
The second, more appropriate approach is to optimize the values of the variables progressively, in the following manner:
- Randomly choose values for x and y.
- Substitute x and y into the given equation and calculate the error, i.e., the difference between the LHS and RHS.
- If the error is positive, decrease the values of x and y alternately until LHS = RHS.
- If the error is negative, increase the values of x and y alternately until LHS = RHS.
In this approach, we are learning to correct our values of x and y based on the error (the difference between LHS and RHS). In a realistic scenario it may take some time to reach the solution, but along the way we can check whether the error is decreasing and converging to 0.
We can also scale the increment/decrement of x and y based on the error: if the error is large, we take huge steps while changing the variables, and if the error is small, we take smaller steps.
Taking a huge step means we may decrease the error quickly, but we may also overshoot and never reach 0. Taking a small step means we approach 0 gradually, but the training process is very slow. Choosing the right amount of increment/decrement is therefore very important.
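The error-guided procedure above can be sketched in Python. A made-up equation, 2x + 3y = 12, stands in for the one being solved; the size of each correction is proportional to the current error, so the steps shrink as we approach the solution:

```python
def solve(rhs, lr=0.01, steps=10_000, tol=1e-6):
    """Adjust x and y until 2*x + 3*y equals rhs (made-up example equation)."""
    x, y = 0.0, 0.0                      # initial guess
    error = (2 * x + 3 * y) - rhs        # error = LHS - RHS
    for _ in range(steps):
        if abs(error) < tol:
            break
        # Positive error -> decrease x and y; negative error -> increase them.
        # Multiplying by lr * error gives big corrections when the error is
        # large and small corrections when it is small.
        x -= lr * error
        y -= lr * error
        error = (2 * x + 3 * y) - rhs
    return x, y, error

x, y, error = solve(12)
print(f"x={x:.3f}, y={y:.3f}, |error|={abs(error):.2e}")
```

The factor `lr` plays the same role as the learning rate discussed below: too large and the corrections overshoot, too small and convergence is slow.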
Which approach does a neuron follow for supervised learning?
The neuron uses the second approach for learning, i.e., progressive variable optimization.
Just replace x and y with the weights w1 and w2 and you'll get a clear picture. Here, the bias is 0.
Loss Function and Gradient Descent
In standard practice, we use a loss function L to calculate the loss (error) at each training step of a neuron, i.e., the difference between the value predicted by the neuron and the actual value present in the training dataset.
Then, we progressively minimize the loss of the neuron using an optimization algorithm for a finite number of steps. A popular way to minimize the neuron's loss is gradient descent.
Here is how gradient descent works:
- Choose a random set of weights initially.
- Feed the input into the neuron along with the weights and bias and calculate the loss using the loss function L.
- Update the weights assigned to each input using the gradient update formula.
- Repeat steps 2 and 3 for a finite number of steps T.
At each iteration, the weights of the neuron are updated using the following formula:

w(t+1) = w(t) − η ∇L(w(t))

where w(t+1) represents the value of the weights at iteration t+1, w(t) represents the value of the weights at iteration t, η represents the learning rate (usually very small in practice, such as 0.001), and ∇L(w(t)) represents the gradient of the loss function L at the current weights.

Here, ∇L is the vector of partial derivatives (gradients) of the loss function L with respect to the weights w,

∇L = [∂L/∂w1, ∂L/∂w2, …, ∂L/∂wn]
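Putting the steps and the update formula together, here is a minimal sketch of gradient descent for a single linear neuron (bias 0) with a mean-squared-error loss. The dataset, learning rate, and number of steps are all made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.normal(size=(100, 2))       # 100 training samples, 2 input features
true_w = np.array([2.0, -3.0])      # hidden "correct" weights (illustrative)
y = X @ true_w                      # targets generated from the hidden rule

w = rng.normal(size=2)              # step 1: choose random initial weights
eta = 0.1                           # learning rate (small, as in practice)
T = 200                             # finite number of iterations

for t in range(T):
    y_hat = X @ w                            # step 2: predict with current weights
    loss = np.mean((y_hat - y) ** 2)         # ... and compute the MSE loss
    grad = 2 * X.T @ (y_hat - y) / len(X)    # gradient of the loss w.r.t. w
    w = w - eta * grad                       # step 3: w(t+1) = w(t) - eta * grad

print(w)  # after training, w should be close to [2.0, -3.0]
```

Each pass through the loop nudges the weights a little in the direction that reduces the loss, exactly as the update formula prescribes.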
Now, let us take a closer look at this update rule and understand what gradient descent means for a neuron.
At the start of the algorithm, we choose a random set of weights. Then, the weights for the next iteration are calculated by decreasing the weights of the previous iteration by the gradient of the loss function, scaled by the learning rate.
The speed of the decrease, or descent, is decided by η: the greater the value of η, the faster our descent, and vice versa. We run this entire process for a finite number of iterations T, and by the end of the training run we hope to have minimized the loss by finding the best possible values of the weights.
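The learning-rate trade-off can be seen on a tiny made-up example: the one-dimensional loss L(w) = (w − 5)², whose gradient is 2(w − 5) and whose minimum is at w = 5:

```python
def descend(eta, steps=50, w0=0.0):
    """Run gradient descent on L(w) = (w - 5)**2, whose gradient is 2*(w - 5)."""
    w = w0
    for _ in range(steps):
        w = w - eta * 2 * (w - 5)
    return w

print(descend(0.4))   # well-chosen step: lands essentially at the minimum, 5
print(descend(0.01))  # tiny step: still far from 5 after 50 iterations
print(descend(1.1))   # too large: each step overshoots further; w diverges
```

With a moderate learning rate the iterates contract toward the minimum; with a tiny one progress is painfully slow; and past a critical size each update overshoots by more than the previous one, so the loss grows instead of shrinking.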
This is how a neuron learns to make the most accurate predictions possible. In the next