Deep Learning - All Optimizers in One Video - SGD with Momentum, Adagrad, Adadelta, RMSprop, Adam Optimizers

Offered By: Krish Naik via YouTube

Tags

Deep Learning Courses, Gradient Descent Courses, Momentum Courses

Course Description

Overview

Explore a comprehensive video tutorial on deep learning optimizers, covering Gradient Descent, Stochastic Gradient Descent (SGD), SGD with Momentum, Adagrad, Adadelta, RMSprop, and Adam. Gain in-depth knowledge of each optimizer's principles, advantages, and applications in machine learning and neural networks. Learn how these optimization algorithms improve model training efficiency and performance through detailed explanations and practical insights.

Syllabus

Gradient Descent
SGD
SGD with Momentum
Adagrad
Adadelta and RMSprop
Adam Optimizer
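
For orientation, the following is a minimal NumPy sketch of the per-step update rules behind the optimizers in the syllabus above. It is not taken from the video: the function names are illustrative, and the hyperparameter values (lr, beta, eps) are common textbook defaults.

```python
# Minimal sketch of the optimizer update rules covered in the video.
# Hyperparameters (lr, beta1, beta2, eps) are common defaults, not
# values quoted from the video itself.
import numpy as np

def sgd(w, grad, lr=0.1):
    return w - lr * grad

def sgd_momentum(w, grad, v, lr=0.1, beta=0.9):
    v = beta * v + grad                    # exponentially decaying velocity
    return w - lr * v, v

def adagrad(w, grad, cache, lr=0.1, eps=1e-8):
    cache = cache + grad ** 2              # per-parameter sum of squared gradients
    return w - lr * grad / (np.sqrt(cache) + eps), cache

def rmsprop(w, grad, cache, lr=0.01, beta=0.9, eps=1e-8):
    cache = beta * cache + (1 - beta) * grad ** 2   # decaying average, not a sum
    return w - lr * grad / (np.sqrt(cache) + eps), cache

def adam(w, grad, m, v, t, lr=0.01, beta1=0.9, beta2=0.999, eps=1e-8):
    m = beta1 * m + (1 - beta1) * grad              # first moment (momentum term)
    v = beta2 * v + (1 - beta2) * grad ** 2         # second moment (RMSprop term)
    m_hat = m / (1 - beta1 ** t)                    # bias correction for early steps
    v_hat = v / (1 - beta2 ** t)
    return w - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

# Toy usage: minimize f(w) = w^2 with Adam.
w, m, v = np.array([5.0]), np.zeros(1), np.zeros(1)
for t in range(1, 201):
    grad = 2 * w                                    # df/dw
    w, m, v = adam(w, grad, m, v, t)
print(w)  # converges toward 0
```

Note how Adam combines the two preceding ideas: the momentum-style first moment and the RMSprop-style second moment, plus bias correction so early steps are not damped toward zero.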


Taught by

Krish Naik
