If you are even slightly familiar with deep learning, you might have heard the term Gradient Descent. If you have not, it is worth knowing.
So let's use some simple analogies to understand what Gradient Descent is.
What is Gradient Descent?
Gradient Descent is an optimization algorithm that is used to minimize a cost function. Whoa, slow down there, Sigmotron.
Don't throw such heavy terms around.
Let's first briefly understand what an optimization algorithm and a cost function are.
Optimization simply means choosing the best option out of the available alternatives. It's like choosing between your mom's delicious food and the restaurant's tempting one.
An algorithm that helps choose the best option is an optimization algorithm. Gradient Descent is one such optimization algorithm.
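To make this concrete, here is a minimal sketch in Python. The options and their cost values are completely made up for illustration; the point is just that "optimizing" means scoring the alternatives and picking the best one.

```python
# A toy optimization: score each option and pick the one with the lowest cost.
# The options and cost values below are hypothetical, purely for illustration.
options = {"mom's delicious food": 1.0, "restaurant's tempting food": 3.5}

best = min(options, key=options.get)  # the alternative with the smallest cost
print(best)  # -> mom's delicious food
```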
Cost Function
Okay, rewind to your school days. You have just finished your exam paper and your friend approaches you saying, 'Man, what a rubbish paper that was.'
Now you and your friend start calculating how much each of you might score, before that one person comes and rips the paper apart saying, 'Enough already.' (Unless you are that person).
So you predict that your score in the dreaded mathematics will be around 60 out of 100.
But miracles do happen, and you score a solid 65!
So here the cost is 5. Simply put, the cost function measures the difference between the actual value and the predicted value.
Cost Function = Actual Value - Predicted Value
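In code, the exam-score example looks like this. This is only a bare-bones sketch of the idea; real models usually square this difference or average it over many predictions, but the intuition is the same.

```python
actual_score = 65     # what you actually scored
predicted_score = 60  # what you assumed you would score

cost = actual_score - predicted_score  # Cost = Actual Value - Predicted Value
print(cost)  # -> 5
```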
In deep learning models, we want the cost function to be as low as possible, because a model that cannot predict accurately is not very appealing. For this, we use 'Gradient Descent'.
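As a small preview, here is a minimal sketch of gradient descent on a toy cost function. The cost function, starting point, learning rate, and number of steps are all made-up choices for illustration; the idea is simply to keep nudging a parameter in the direction that makes the cost smaller.

```python
# Toy example: minimize cost(w) = (w - 3) ** 2, which is lowest at w = 3.
# Its gradient (slope) with respect to w is 2 * (w - 3).
w = 0.0              # an arbitrary starting guess
learning_rate = 0.1  # hypothetical step size

for _ in range(50):
    gradient = 2 * (w - 3)         # slope of the cost at the current w
    w -= learning_rate * gradient  # step downhill, against the slope

print(round(w, 3))  # -> 3.0, the value where the cost is lowest
```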
So, everything is cool till here? Alright. In the next part, we will see how Gradient Descent actually works.