Deep Learning - All Optimizers in One Video - SGD with Momentum, Adagrad, Adadelta, RMSprop, Adam Optimizers

Offered By: Krish Naik via YouTube

Tags

Deep Learning Courses, Gradient Descent Courses, Momentum Courses

Course Description

Overview

Explore a comprehensive video tutorial on deep learning optimizers, covering Gradient Descent, Stochastic Gradient Descent (SGD), SGD with Momentum, Adagrad, Adadelta, RMSprop, and Adam. Gain in-depth knowledge of each optimizer's principles, advantages, and applications in machine learning and neural networks. Learn how these optimization algorithms improve model training efficiency and performance through detailed explanations and practical insights.
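
As a concrete companion to the video's first topics, here is a minimal Python sketch of the two simplest update rules covered: vanilla (stochastic) gradient descent and SGD with momentum. The function names, hyperparameter defaults, and the toy quadratic objective are illustrative assumptions, not taken from the video itself.

```python
def sgd_step(w, grad, lr=0.01):
    # Vanilla (stochastic) gradient descent:
    # step a small distance against the gradient.
    return w - lr * grad

def momentum_step(w, grad, velocity, lr=0.01, beta=0.9):
    # SGD with momentum: keep an exponentially decaying average of
    # past gradients, which damps oscillations and speeds up descent
    # along consistent directions.
    velocity = beta * velocity - lr * grad
    return w + velocity, velocity

# Toy usage (assumed example): minimise f(w) = w**2, gradient 2*w.
w, v = 5.0, 0.0
for _ in range(100):
    grad = 2 * w
    w, v = momentum_step(w, grad, v)
# w is now close to the minimum at 0.
```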

Syllabus

Gradient Descent
SGD
SGD with Momentum
Adagrad
Adadelta and RMSprop
Adam Optimizer
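
For the adaptive-learning-rate methods in the later syllabus items, the following sketch contrasts the Adagrad, Adadelta, RMSprop, and Adam updates. The hyperparameter defaults are the values commonly cited in the literature and are assumptions here, not figures from the video.

```python
import numpy as np

def adagrad_step(w, grad, cache, lr=0.01, eps=1e-8):
    # Adagrad: per-parameter learning rate shrinks with the
    # accumulated sum of squared gradients.
    cache = cache + grad ** 2
    return w - lr * grad / (np.sqrt(cache) + eps), cache

def rmsprop_step(w, grad, cache, lr=0.001, beta=0.9, eps=1e-8):
    # RMSprop: replace Adagrad's running sum with an exponential
    # moving average, so the effective learning rate never decays to zero.
    cache = beta * cache + (1 - beta) * grad ** 2
    return w - lr * grad / (np.sqrt(cache) + eps), cache

def adadelta_step(w, grad, sq_grad, sq_delta, rho=0.95, eps=1e-6):
    # Adadelta: like RMSprop, but the step size is scaled by a running
    # average of past parameter updates, removing the global learning rate.
    sq_grad = rho * sq_grad + (1 - rho) * grad ** 2
    delta = -np.sqrt(sq_delta + eps) / np.sqrt(sq_grad + eps) * grad
    sq_delta = rho * sq_delta + (1 - rho) * delta ** 2
    return w + delta, sq_grad, sq_delta

def adam_step(w, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    # Adam: combine momentum (first moment) with RMSprop-style scaling
    # (second moment), plus bias correction for the early steps.
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    return w - lr * m_hat / (np.sqrt(v_hat) + eps), m, v
```

Each function is stateless apart from the accumulators it is handed back, which mirrors how these optimizers are usually presented: the difference between them is entirely in how the per-parameter step size is adapted over time.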


Taught by

Krish Naik

Related Courses

Neural Networks for Machine Learning
University of Toronto via Coursera
機器學習技法 (Machine Learning Techniques)
National Taiwan University via Coursera
Machine Learning Capstone: An Intelligent Application with Deep Learning
University of Washington via Coursera
Прикладные задачи анализа данных (Applied Problems of Data Analysis)
Moscow Institute of Physics and Technology via Coursera
Leading Ambitious Teaching and Learning
Microsoft via edX