In this post, we will do the math on our dummy dataset and calculate the feedforward steps by hand. We will take the parameters of our first instance, i.e. the first house, as the input vector, along with arbitrarily chosen random weights. Our dummy dataset was as follows:
Let’s recall how the feature vector and weights are multiplied to get the inputs to the hidden layer, which will become the values of our hidden nodes.
Let’s see what goes into the calculation for hidden neuron 1.
First we take the dot product of our input vector and the corresponding weights. Each column of our weight matrix holds the weights feeding one hidden node, so the first column corresponds to hidden node 1. The weight matrix has as many rows as there are input features and as many columns as there are hidden nodes.
Now, substituting the values as seen in fig 2:
= 1500*0.10 + 2*0.20 + 2*0.30
= 150 + 0.4 + 0.6
= 151
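As a quick sanity check, this dot product can be reproduced in a few lines of NumPy (a sketch; the input values and weights are taken from the figures above):

```python
import numpy as np

# Input vector for the first house: [sqft, bedrooms, bathrooms]
x = np.array([1500.0, 2.0, 2.0])

# Weights feeding hidden neuron 1 (first column of the weight matrix)
w1 = np.array([0.10, 0.20, 0.30])

# 1500*0.10 + 2*0.20 + 2*0.30, which comes out to 151 (up to float rounding)
z1 = np.dot(x, w1)
print(z1)
```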
Our sigmoid activation is defined as:
sigmoid(z) = 1 / (1 + e^(-z))
Applying sigmoid to this value we get:
sigmoid(151) ≈ 1
An activation function decides whether to ‘fire’ the neuron or not. Here we have used sigmoid activation, which gives a value between 0 and 1. Other activation functions can also be used, for example ReLU.
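The sigmoid definition above translates directly into code (a minimal sketch):

```python
import numpy as np

def sigmoid(z):
    """Squash z into the range (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

print(sigmoid(0.0))    # 0.5, the midpoint
print(sigmoid(2.6))    # ~0.93, a value we will see again below
print(sigmoid(151.0))  # so close to 1 that float64 rounds it to exactly 1.0
```

Notice how quickly sigmoid saturates: any input beyond roughly ±40 is indistinguishable from 0 or 1 in double precision, which is why our large unscaled inputs all land at 1.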
Calculating for hidden node 2,
= 1500*0.15 + 2*0.25 + 2*0.35
= 225 + 0.5 + 0.7
= 226.2
Applying sigmoid, sigmoid(226.2) ≈ 1.
Calculating for hidden node 3,
= 1500*0 + 2*0.50 + 2*0.80
= 0 + 1 + 1.6
= 2.6
Applying sigmoid, sigmoid(2.6) ≈ 0.93.
Calculating for hidden node 4,
= 1500*0.20 + 2*0.75 + 2*0.5
= 300 + 1.5 + 1
= 302.5
Applying sigmoid, sigmoid(302.5) ≈ 1.
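All four hidden-node calculations above can be done at once as a single matrix product (a sketch; the weight matrix is assembled from the per-node weights used above, one column per hidden node):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Feature vector of the first house: [sqft, bedrooms, bathrooms]
x = np.array([1500.0, 2.0, 2.0])

# Weight matrix: one row per input feature, one column per hidden node
W = np.array([
    [0.10, 0.15, 0.00, 0.20],
    [0.20, 0.25, 0.50, 0.75],
    [0.30, 0.35, 0.80, 0.50],
])

z = x @ W       # pre-activations: [151, 226.2, 2.6, 302.5]
h = sigmoid(z)  # activations: approximately [1, 1, 0.93, 1]
print(np.round(h, 2))
```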
Now we have the values of our four hidden neurons, approximately [1, 1, 0.93, 1], and we move on to calculate the input to the output neuron. We do not apply an activation on the output neuron of a regression task, so here we will just take the dot product and sum it all up.
= 1*0.20 + 1*0.15 + 0.93*0.10 + 1*0.25
= 0.20 + 0.15 + 0.093 + 0.25
= 0.693
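The output-layer step is the same dot-product pattern, just without an activation (a sketch using the rounded hidden activations from above):

```python
import numpy as np

# Hidden-layer activations as rounded in the walkthrough above
h = np.array([1.0, 1.0, 0.93, 1.0])

# Hidden-to-output weights
w_out = np.array([0.20, 0.15, 0.10, 0.25])

# Regression output: plain dot product, no activation applied
y = h @ w_out
print(y)  # ~0.693
```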
The prediction in the output layer is 0.693. Now, how is this possible? $0.693 for a house?? Yes, your doubts about my calculations are justified, so let’s see why this is. Let’s not lose sight of the fact that this is not our final output; it is only our first iteration. Here, the predicted label and the true label are compared, and we will use this difference in our back-propagation step to compute partial derivatives and distribute the error.
After the back-propagation step, the network will learn the weights, gradually adjusting them to predict a better and more accurate number by the time we finish all our iterations.
In this example we use only one hidden layer, but we can also have more. The number of layers affects the performance of our model, so we need to choose it carefully, neither too big nor too small. If we had a second hidden layer, the activations of the hidden neurons in layer 1 would be the inputs to the hidden neurons in layer 2 (after the dot product and activation), and the calculation would be similar to the one we did above.
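To make the stacking concrete, here is a sketch of a two-hidden-layer forward pass. The layer-1 weights are the ones from the walkthrough; the layer-2 and output weights are made up purely for illustration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([1500.0, 2.0, 2.0])

# Layer-1 weights from the walkthrough (3 features -> 4 hidden nodes)
W1 = np.array([
    [0.10, 0.15, 0.00, 0.20],
    [0.20, 0.25, 0.50, 0.75],
    [0.30, 0.35, 0.80, 0.50],
])

# Hypothetical layer-2 weights (4 nodes -> 3 nodes), chosen arbitrarily
W2 = np.array([
    [0.1, 0.2, 0.3],
    [0.4, 0.5, 0.6],
    [0.7, 0.8, 0.9],
    [0.2, 0.1, 0.0],
])

# Hypothetical output weights (3 nodes -> 1 output)
w_out = np.array([0.3, 0.2, 0.1])

h1 = sigmoid(x @ W1)  # layer 1: dot product + activation
h2 = sigmoid(h1 @ W2) # layer 2: same pattern, h1 is now the input
y = h2 @ w_out        # regression output: no activation
print(y)
```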
– The data can also be scaled or standardized so that the feature values are on comparable ranges. There are several blogs about scaling data; I would advise you to read up on it.
– Other activations like ReLU can improve accuracy. ReLU is often preferred in hidden layers because it avoids the saturation that makes sigmoid gradients vanish (though ReLU units can themselves ‘die’ and get stuck outputting zero).
– To keep things simple, I haven’t used the bias neuron but it can also be used. It will help the model learn faster if the bias unit is initialized to a value in the thousands, say 50,000.
– I would also encourage you to try other machine learning algorithms for this task, not just neural nets.
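The scaling tip above can be sketched in a few lines of NumPy. The extra houses here are hypothetical, made up just to have something to standardize:

```python
import numpy as np

# A few hypothetical houses: [sqft, bedrooms, bathrooms]
X = np.array([
    [1500.0, 2.0, 2.0],
    [2100.0, 3.0, 2.0],
    [1200.0, 2.0, 1.0],
])

# Standardize each feature column: zero mean, unit variance
X_std = (X - X.mean(axis=0)) / X.std(axis=0)
print(X_std)
```

After this, no feature dwarfs the others the way raw square footage dwarfed bedroom counts in our hand calculation, so sigmoid no longer saturates immediately.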
1- The most important resource for this blog was Kaggle discussions. I would like to thank Ryan Holbrook for his help in the Learn discussion section of the Intro to Deep Learning course.
2- Matt Mazur’s blog Step-by-step backpropagation example.
3- Hands-on ML by O’Reilly publications.
4- Andrew Ng’s description of topics. This is my go-to whenever I want to develop intuition for anything.
5- Connect with people on social media, join slack channels, attend study sessions. Having a community will help in ways you don’t even know yet.
And many more YouTube and Google searches.
Coming next: Neural Networks – Backprop Math
Thank you for your time!!! All input is welcome, constructive criticism is deeply appreciated. Do let me know how you feel about the approach, the content and, well, basically everything. 🙂