On Gradient-Based Optimization: Accelerated, Stochastic and Nonconvex
Offered By: Paul G. Allen School via YouTube
Course Description
Overview
Explore cutting-edge developments in gradient-based optimization for large-scale statistical data analysis in this lecture by Michael I. Jordan, Distinguished Professor at UC Berkeley. Delve into three key areas: a novel framework for understanding Nesterov acceleration from continuous-time and Lagrangian perspectives, efficient methods for escaping saddle points in nonconvex optimization, and the acceleration of Langevin diffusions. Gain insights from Jordan's interdisciplinary approach bridging the computational, statistical, cognitive, and biological sciences. Learn from a renowned expert whose accolades include membership in the National Academy of Sciences and the ACM/AAAI Allen Newell Award.
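As brief background on these three topics, a few illustrative sketches follow; they are standard reference material, not excerpts from the lecture. First, the continuous-time perspective treats Nesterov's accelerated gradient method as a discretization of a second-order differential equation, identified by Su, Boyd, and Candès and generalized by Jordan and collaborators through a Bregman-Lagrangian framework. For a convex objective f, the ODE and its convergence rate read:

```latex
\[
  \ddot{X}_t + \frac{3}{t}\,\dot{X}_t + \nabla f(X_t) = 0,
  \qquad
  f(X_t) - f(x^\star) = O\!\left(\frac{1}{t^2}\right),
\]
```

The O(1/t^2) decay along the trajectory mirrors the O(1/k^2) rate of the discrete method.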
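Second, plain gradient descent can stall near strict saddle points, where the gradient vanishes but negative curvature remains; an occasional small random perturbation suffices to escape. Below is a minimal sketch of that idea in the spirit of perturbed gradient descent (cf. Jin et al., "How to Escape Saddle Points Efficiently"); the function and parameter names are illustrative, not the analyzed algorithm from the lecture.

```python
import numpy as np

def perturbed_gradient_descent(grad, x0, step=1e-2, noise_radius=1e-2,
                               grad_tol=1e-3, perturb_every=50,
                               n_iters=10_000, seed=0):
    """Gradient descent with occasional random kicks near stationary points.

    Illustrative sketch of the perturbation idea; not the analyzed
    algorithm of Jin et al.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    last_kick = -perturb_every
    for t in range(n_iters):
        g = grad(x)
        if np.linalg.norm(g) < grad_tol and t - last_kick >= perturb_every:
            # Near a first-order stationary point the gradient gives no
            # descent direction; a small random kick lets negative
            # curvature take over and pull the iterate off the saddle.
            u = rng.normal(size=x.shape)
            x = x + noise_radius * u / np.linalg.norm(u)
            last_kick = t
        else:
            x = x - step * g
    return x

# f(x, y) = x**4/4 - x**2/2 + y**2/2 has a strict saddle at the origin
# and minima at (+-1, 0); starting exactly at the saddle still escapes.
grad_f = lambda v: np.array([v[0]**3 - v[0], v[1]])
print(perturbed_gradient_descent(grad_f, x0=[0.0, 0.0]))  # approx (+-1, 0)
```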
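Third, the Langevin diffusion is a stochastic differential equation whose stationary distribution is the target density. The toy Euler-Maruyama discretization below shows only the baseline object being accelerated; it does not implement the accelerated (e.g., underdamped) variants the lecture discusses, and all names and parameters are illustrative.

```python
import numpy as np

def unadjusted_langevin(grad_U, x0, step=1e-2, n_iters=50_000, seed=1):
    """Euler-Maruyama discretization of the overdamped Langevin diffusion
        dX_t = -grad U(X_t) dt + sqrt(2) dB_t,
    whose stationary distribution is proportional to exp(-U(x)).
    Toy sketch only; accelerated variants are not reproduced here.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    samples = np.empty((n_iters, x.size))
    for t in range(n_iters):
        # One gradient step on the potential plus injected Gaussian noise.
        x = x - step * grad_U(x) + np.sqrt(2 * step) * rng.normal(size=x.shape)
        samples[t] = x
    return samples

# U(x) = ||x||^2 / 2 targets a standard Gaussian; check the first moments.
s = unadjusted_langevin(lambda x: x, x0=np.zeros(2))
print(s.mean(axis=0), s.var(axis=0))  # roughly [0, 0] and [1, 1]
```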
Syllabus
Taskar Memorial Lecture 2018: M. Jordan (UC Berkeley)
Taught by
Paul G. Allen School
Related Courses
Introduction to Artificial Intelligence (Stanford University via Udacity)
Natural Language Processing (Columbia University via Coursera)
Probabilistic Graphical Models 1: Representation (Stanford University via Coursera)
Computer Vision: The Fundamentals (University of California, Berkeley via Coursera)
Learning from Data (Introductory Machine Learning course) (California Institute of Technology via Independent)