Gradient Descent, Stochastic Gradient Descent, and Acceleration - Part 2

Offered By: Institute for Pure & Applied Mathematics (IPAM) via YouTube

Tags

Gradient Descent Courses, Machine Learning Courses, Ordinary Differential Equations Courses, Computational Mathematics Courses, Stochastic Gradient Descent Courses

Course Description

Overview

Delve into the second part of a comprehensive tutorial on gradient-based optimization methods for large-scale machine learning problems. Explore gradient descent, stochastic gradient descent, and their accelerated versions as presented by Adam Oberman from McGill University. Gain insights into the fundamental concepts behind these algorithms, their convergence results, and their connections to ordinary differential equations. Learn how these first-order methods are essential for tackling high-dimensional optimization challenges where second-order approximations are impractical. This tutorial, part of the High Dimensional Hamilton-Jacobi PDEs Tutorials 2020 series, offers valuable knowledge for researchers and practitioners working on advanced machine learning optimization techniques.
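
As a rough illustration of the three methods the tutorial covers (this sketch is not taken from Oberman's lectures), the snippet below minimizes a least-squares objective with plain gradient descent, stochastic gradient descent, and Nesterov acceleration. Gradient descent can be read as forward-Euler time-stepping of the gradient flow dx/dt = -grad f(x), which is the ODE connection the overview alludes to. The problem data, step size, and momentum schedule here are hypothetical choices for the example.

```python
import numpy as np

# Illustrative sketch: minimize f(x) = ||A x - b||^2 / (2n)
# with the three first-order methods named in the tutorial.
rng = np.random.default_rng(0)
n, d = 200, 10
A = rng.standard_normal((n, d))
b = rng.standard_normal(n)

def full_grad(x):
    # Exact gradient of the averaged least-squares loss.
    return A.T @ (A @ x - b) / n

def stoch_grad(x, i):
    # Unbiased single-sample estimate of full_grad(x).
    return A[i] * (A[i] @ x - b[i])

lr = 0.05                 # hypothetical step size, well below 1/L here
x_gd = np.zeros(d)        # gradient descent iterate
x_sgd = np.zeros(d)       # stochastic gradient descent iterate
x_acc = np.zeros(d)       # Nesterov-accelerated iterate
y = x_acc.copy()          # extrapolated point used by acceleration

for k in range(500):
    # Gradient descent: x_{k+1} = x_k - lr * grad f(x_k),
    # i.e., a forward-Euler step of the gradient-flow ODE.
    x_gd -= lr * full_grad(x_gd)

    # SGD: the same update with a cheap unbiased gradient estimate.
    x_sgd -= lr * stoch_grad(x_sgd, rng.integers(n))

    # Nesterov acceleration: gradient step at the extrapolated point y,
    # then momentum extrapolation with the classical k/(k+3) weight.
    x_new = y - lr * full_grad(y)
    y = x_new + (k / (k + 3)) * (x_new - x_acc)
    x_acc = x_new

for name, x in [("GD", x_gd), ("SGD", x_sgd), ("Accelerated", x_acc)]:
    print(name, 0.5 * np.linalg.norm(A @ x - b) ** 2 / n)
```

Running the loop shows the accelerated iterate reaching a lower loss than plain gradient descent in the same number of iterations, while SGD hovers near the optimum with noise, matching the convergence behavior the tutorial analyzes.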

Syllabus

Adam Oberman: "Gradient descent, Stochastic gradient descent, and acceleration" (Part 2/2)


Taught by

Institute for Pure & Applied Mathematics (IPAM)

Related Courses

Introduction to Computer Science: Information Encoding
Universitat Jaume I via Independent
Introduction to Video Game Development with Unity3D
Universitat Jaume I via Independent
Numerical Analysis
Vidyasagar University via Swayam
Computational Mathematics with SageMath
Institute of Chemical Technology (ICT) via Swayam
Computational Commutative Algebra
NPTEL via YouTube