On Gradient-Based Optimization - Accelerated, Distributed, Asynchronous and Stochastic
Offered By: Simons Institute via YouTube
Course Description
Overview
Explore gradient-based optimization techniques in this lecture by Michael Jordan from UC Berkeley. Delve into accelerated, distributed, asynchronous, and stochastic methods for machine learning optimization. Learn about variational approaches, covariant operators, discretization, gradient flow, Hamiltonian formulation, and gradient descent structures. Discover strategies for avoiding saddle points, understand the role of differential geometry in nonconvex optimization, and gain insights into stochastic gradient control. Enhance your understanding of computational challenges in machine learning through this comprehensive exploration of advanced optimization concepts.
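As a taste of the accelerated methods the lecture surveys, here is a minimal sketch of Nesterov-style accelerated gradient descent on a simple quadratic objective. The objective, step size, and momentum values are illustrative assumptions, not taken from the lecture itself.

```python
import numpy as np

def nesterov_gd(grad, x0, lr=0.1, momentum=0.9, steps=200):
    """Minimize an objective via Nesterov-style accelerated gradient descent.

    grad: gradient function of the objective; x0: starting point.
    lr and momentum are illustrative hyperparameters.
    """
    x = np.asarray(x0, dtype=float)
    v = np.zeros_like(x)  # velocity (momentum) term
    for _ in range(steps):
        # The "look-ahead" gradient evaluation at x + momentum * v is what
        # distinguishes Nesterov's method from plain heavy-ball momentum.
        g = grad(x + momentum * v)
        v = momentum * v - lr * g
        x = x + v
    return x

# Example: minimize f(x) = 0.5 * x^T A x with A positive definite,
# whose unique minimizer is the origin.
A = np.diag([1.0, 10.0])
x_min = nesterov_gd(lambda x: A @ x, x0=[5.0, 5.0])
```

The momentum term accumulates past gradients, which is what yields the accelerated convergence rates discussed in the lecture's variational and Hamiltonian framings of these dynamics.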
Syllabus
Intro
What Is Variational?
Gradient-Based Optimization
Covariant operator
Discretization
Summary
Gradient Flow
Hamiltonian Formulation
Gradient Descent
Diffusions
Assumptions
Gradient Descent Structure
Avoiding Saddle Points
Differential Geometry
Nonconvex Optimization
Stochastic Gradient Control
Taught by
Simons Institute
Related Courses
Nonlinear Dynamics 1: Geometry of Chaos - Georgia Institute of Technology via Independent
Geometría diferencial y Mecánica: una introducción (Differential Geometry and Mechanics: An Introduction) - Universidad de La Laguna via Miríadax
Differential Geometry - Math at Andrews via YouTube
Curvature for the General Paraboloid - Differential Geometry - NJ Wildberger - Insights into Mathematics via YouTube