Talk 11/3/23: Gradient Descent, by Usufu Nyakoojo and Humu Mohammed

Abstract:

In training an artificial neural network, we need to find model parameters that minimize the cost function while maintaining an appropriate learning rate. Gradient descent, and in particular stochastic gradient descent, comes in handy as an iterative optimization approach to address this need.

In this talk, we shall present how this can be achieved, i.e., how the cost function can be minimized through iterative updates of the model parameters at an appropriate learning rate.
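As a concrete illustration of the idea, here is a minimal sketch in Python of the stochastic gradient descent update w ← w − η ∇J(w), applied to a simple least-squares cost. The cost function, data, and hyperparameters below are illustrative assumptions, not the specific setup of the talk.

```python
import numpy as np

# Illustrative cost: J(w) = (1/2m) * ||X w - y||^2 over a mini-batch of size m,
# whose gradient is X^T (X w - y) / m. The learning rate eta controls the step size.

def sgd(X, y, eta=0.05, epochs=200, batch_size=8, seed=None):
    """Iteratively update parameters via w <- w - eta * grad(J) on random mini-batches."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        order = rng.permutation(n)                    # shuffle so each mini-batch is random
        for start in range(0, n, batch_size):
            idx = order[start:start + batch_size]
            Xb, yb = X[idx], y[idx]
            grad = Xb.T @ (Xb @ w - yb) / len(idx)    # gradient of the mini-batch cost
            w -= eta * grad                           # descent step at learning rate eta
    return w

# Usage: recover the true parameters [2, -3] from noisy linear data.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = X @ np.array([2.0, -3.0]) + 0.1 * rng.normal(size=200)
print(sgd(X, y))    # approximately [ 2. -3.]
```

If the learning rate eta is too large the iterates diverge, and if it is too small convergence is needlessly slow; the mini-batch sampling is what distinguishes the stochastic variant from full-batch gradient descent.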
