On Gradient-Based Optimization - Accelerated, Distributed, Asynchronous and Stochastic

Offered By: Simons Institute via YouTube

Tags

Machine Learning Courses, Differential Geometry Courses, Variational Methods Courses, Nonconvex Optimization Courses

Course Description

Overview

Explore gradient-based optimization techniques in this lecture by Michael Jordan from UC Berkeley. Delve into accelerated, distributed, asynchronous, and stochastic methods for machine learning optimization. Learn about variational approaches, covariant operators, discretization, gradient flow, Hamiltonian formulation, and gradient descent structures. Discover strategies for avoiding saddle points, understand the role of differential geometry in nonconvex optimization, and gain insights into stochastic gradient control. Enhance your understanding of computational challenges in machine learning through this comprehensive exploration of advanced optimization concepts.
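As a rough illustration of the acceleration-and-discretization theme described above (my own sketch, not code from the lecture), the snippet below compares plain gradient descent with Nesterov's accelerated gradient on an ill-conditioned quadratic; acceleration can be viewed as a particular discretization of a second-order ODE. The quadratic, step size, and iteration counts are illustrative choices.

```python
import numpy as np

# Illustrative quadratic f(x) = 0.5 * x^T A x with condition number 100.
A = np.diag([1.0, 100.0])
grad = lambda x: A @ x          # gradient of f
L = 100.0                       # Lipschitz constant of the gradient
step = 1.0 / L

def gradient_descent(x0, iters=500):
    """Plain gradient descent: explicit Euler discretization of gradient flow."""
    x = x0.copy()
    for _ in range(iters):
        x -= step * grad(x)
    return x

def nesterov(x0, iters=500):
    """Nesterov's accelerated gradient with the (k-1)/(k+2) momentum schedule."""
    x, y = x0.copy(), x0.copy()
    for k in range(1, iters + 1):
        x_new = y - step * grad(y)              # gradient step from lookahead point
        y = x_new + (k - 1) / (k + 2) * (x_new - x)  # momentum extrapolation
        x = x_new
    return x

x0 = np.array([1.0, 1.0])
print("GD:      ", np.linalg.norm(gradient_descent(x0)))
print("Nesterov:", np.linalg.norm(nesterov(x0)))
```

Both iterates approach the minimizer at the origin; the momentum term is what distinguishes the accelerated dynamics from the plain gradient-flow discretization.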

Syllabus

Intro
What is variational?
Gradient-based optimization
Covariant operator
Discretization
Summary
Gradient Flow
Hamiltonian Formulation
Gradient Descent
Diffusions
Assumptions
Gradient Descent Structure
Avoiding Saddle Points
Differential geometry
Nonconvex optimization
Stochastic gradient control
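To give a concrete feel for the "Avoiding Saddle Points" topic in the syllabus, here is a toy sketch (my own illustration, not the lecture's algorithm): gradient descent started exactly at a strict saddle point stalls there, but injecting a small random perturbation when the gradient vanishes lets the iterate escape toward a local minimum. The test function, step size, and noise scale are illustrative assumptions.

```python
import numpy as np

# f(x, y) = (x^2 - 1)^2 + y^2 has a strict saddle at the origin
# (Hessian eigenvalues -4 and 2) and minima at (+1, 0) and (-1, 0).
rng = np.random.default_rng(0)

def grad(p):
    x, y = p
    return np.array([4 * x * (x**2 - 1), 2 * y])

def perturbed_gd(p0, step=0.01, iters=2000, noise=1e-3):
    """Gradient descent that perturbs the iterate near critical points."""
    p = np.asarray(p0, dtype=float)
    for _ in range(iters):
        g = grad(p)
        if np.linalg.norm(g) < 1e-6:
            # Near a critical point: a tiny random kick pushes the iterate
            # onto the Hessian's negative-curvature direction, escaping the saddle.
            p = p + noise * rng.standard_normal(2)
        else:
            p = p - step * g
    return p

p = perturbed_gd([0.0, 0.0])   # start exactly at the saddle point
print("final iterate:", p)
```

Plain gradient descent from (0, 0) would never move, since the gradient there is exactly zero; the perturbation is what allows escape along the negative-curvature direction.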


Taught by

Simons Institute

Related Courses

Optimisation - An Introduction: Professor Coralia Cartis, University of Oxford
Alan Turing Institute via YouTube
Optimization in Signal Processing and Machine Learning
IEEE Signal Processing Society via YouTube
Methods for L_p-L_q Minimization in Image Restoration and Regression - SIAM-IS Seminar
Society for Industrial and Applied Mathematics via YouTube
Certificates of Nonnegativity and Their Applications in Theoretical Computer Science
Society for Industrial and Applied Mathematics via YouTube
Robust Regression by Purushottam Kar
International Centre for Theoretical Sciences via YouTube