Gradient Descent and Stochastic Gradient Descent

Offered By: Paul Hand via YouTube

Tags

Gradient Descent, Algorithms and Data Structures, Convex Functions, Optimization Algorithms, Deep Neural Networks, Stochastic Gradient Descent

Course Description

Overview

Explore gradient descent and stochastic gradient descent in deep neural networks through this 57-minute lecture. Delve into the effects of varying learning rates, examining the consequences of rates that are too high or too low. Analyze convergence rates for both gradient descent and stochastic gradient descent in the context of convex functions, and access the accompanying notes for a comprehensive treatment of the topic. Part of Northeastern University's CS 7150 Summer 2020 Deep Learning course, the lecture covers an introduction, gradient descent convergence, a recovery theorem with its proof and interpretation, the challenges of gradient descent, stochastic gradient descent, step sizes and learning rates, and their associated challenges.
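
For context on the rates mentioned above: for a convex, L-smooth function, gradient descent with step size 1/L satisfies f(x_k) - f(x*) <= L * ||x_0 - x*||^2 / (2k), while stochastic gradient descent with suitably decaying step sizes achieves expected suboptimality of O(1/sqrt(k)). These are the standard textbook bounds; the lecture's exact statements and constants may differ.

The following self-contained Python sketch (illustrative only, not taken from the lecture or its notes) shows the learning-rate effects the lecture discusses: a rate that is too small converges slowly, one that is too large diverges, and single-sample stochastic gradient descent hovers noisily around the minimizer.

    import random

    # Convex objective f(x) = (1/n) * sum_i (x - a_i)^2; its minimizer is
    # the mean of the data, here 3.0. All names are illustrative.
    data = [1.0, 2.0, 3.0, 4.0, 5.0]

    def full_gradient(x):
        # f'(x) = (2/n) * sum_i (x - a_i)
        return 2.0 * sum(x - a for a in data) / len(data)

    def gradient_descent(x0, lr, steps):
        x = x0
        for _ in range(steps):
            x -= lr * full_gradient(x)   # one full pass over the data per step
        return x

    def sgd(x0, lr, steps, seed=0):
        rng = random.Random(seed)
        x = x0
        for _ in range(steps):
            a = rng.choice(data)         # unbiased single-sample gradient estimate
            x -= lr * 2.0 * (x - a)
        return x

    # Here the smoothness constant is L = 2, so gradient descent is stable
    # only for lr < 2/L = 1: too small is slow, too large diverges.
    for lr in (0.01, 0.3, 1.5):
        print(f"GD  lr={lr}: x = {gradient_descent(10.0, lr, 50):.4f}")
    print(f"SGD lr=0.1: x = {sgd(10.0, 0.1, 500):.4f}")

Running it shows x crawling toward 3.0 at lr=0.01, reaching it at lr=0.3, and blowing up at lr=1.5, while SGD lands near 3.0 but not exactly on it, motivating the decaying step sizes covered in the syllabus below.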

Syllabus

Introduction
Gradient Descent Convergence
Recovery Theorem
Proof
Interpretation
Gradient Descent Challenges
Stochastic Gradient Descent
Step Sizes and Learning Rates
Challenges
Learning Rates


Taught by

Paul Hand

Related Courses

Learn Explainable AI (XAI)
Codecademy
Basics of Machine Learning
RWTH Aachen University via edX
Introduction to RNN and DNN
Packt via Coursera
Image Classification on Autopilot with AWS AutoGluon
Coursera Project Network via Coursera
Object Tracking and Motion Detection with Computer Vision
MathWorks via Coursera