On Gradient-Based Optimization: Accelerated, Stochastic and Nonconvex
Offered By: Paul G. Allen School via YouTube
Course Description
Overview
Explore cutting-edge developments in gradient-based optimization for large-scale statistical data analysis in this lecture by Michael I. Jordan, a distinguished professor from UC Berkeley. Delve into three key areas: a novel framework for understanding Nesterov acceleration using continuous-time and Lagrangian perspectives, efficient methods for escaping saddle points in nonconvex optimization, and the acceleration of Langevin diffusion. Gain insights from Jordan's interdisciplinary approach bridging computational, statistical, cognitive, and biological sciences. Learn from a renowned expert who has received numerous accolades, including membership in the National Academy of Sciences and the ACM/AAAI Allen Newell Award.
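To give a concrete flavor of the first topic, below is a minimal sketch (not taken from the lecture) of Nesterov's accelerated gradient method applied to a small quadratic problem; the step size, momentum value, and test problem are illustrative assumptions, not part of the course material.

```python
import numpy as np

def nesterov_accelerated_gradient(grad, x0, lr=0.1, momentum=0.9, num_iters=200):
    """Minimize a smooth function with Nesterov's accelerated gradient method.

    grad: callable returning the gradient at a point
    x0:   initial iterate (NumPy array)
    """
    x = x0.copy()
    v = np.zeros_like(x0)                 # momentum (velocity) buffer
    for _ in range(num_iters):
        lookahead = x + momentum * v      # gradient is evaluated at the look-ahead point
        v = momentum * v - lr * grad(lookahead)
        x = x + v
    return x

# Illustrative example: minimize f(x) = 0.5 * x^T A x - b^T x
A = np.array([[3.0, 0.5], [0.5, 1.0]])
b = np.array([1.0, -2.0])
grad_f = lambda x: A @ x - b

x_star = nesterov_accelerated_gradient(grad_f, x0=np.zeros(2))
print(x_star)                   # approaches the solution of A x = b
print(np.linalg.solve(A, b))    # exact minimizer, for comparison
```

The look-ahead gradient evaluation is what distinguishes Nesterov's scheme from plain momentum; the lecture's continuous-time and Lagrangian framework offers one explanation of why this yields accelerated convergence rates.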
Syllabus
Taskar Memorial Lecture 2018: M. Jordan (UC Berkeley)
Taught by
Paul G. Allen School
Related Courses
On Gradient-Based Optimization - Accelerated, Distributed, Asynchronous and Stochastic (Simons Institute via YouTube)
Optimisation - An Introduction: Professor Coralia Cartis, University of Oxford (Alan Turing Institute via YouTube)
Optimization in Signal Processing and Machine Learning (IEEE Signal Processing Society via YouTube)
Methods for L_p-L_q Minimization in Image Restoration and Regression - SIAM-IS Seminar (Society for Industrial and Applied Mathematics via YouTube)
Certificates of Nonnegativity and Their Applications in Theoretical Computer Science (Society for Industrial and Applied Mathematics via YouTube)