Better Understanding of Non-convex Methods in Machine Learning
Offered By: Paul G. Allen School via YouTube
Course Description
Overview
Explore the frontiers of non-convex optimization in machine learning through this illuminating lecture by Princeton University's Tengyu Ma. Delve into recent breakthroughs in deep learning and the challenges of analyzing complex, high-dimensional models trained on massive datasets. Discover new algorithmic approaches and analysis tools for non-convex methods, including insights on matrix completion solved by stochastic gradient descent. Examine the landscape of objective functions for linearized recurrent neural networks and residual networks, and learn how over-parameterization and re-parameterization can simplify optimization processes. Gain valuable knowledge about the formal study of non-convex methods and their applications in advancing machine learning techniques.
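The overview mentions matrix completion solved by stochastic gradient descent as one example of a non-convex method. The following is a minimal, illustrative sketch of that general idea only, not material from the lecture itself: it runs SGD on the non-convex factorized objective min over (U, V) of the squared error on observed entries. The synthetic data, dimensions, learning rate, and epoch count are all assumptions chosen for the demo.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic low-rank ground truth and a random pattern of observed entries
# (all sizes here are illustrative assumptions, not from the lecture).
n, m, rank = 50, 40, 3
M = rng.normal(size=(n, rank)) @ rng.normal(size=(rank, m))
mask = rng.random((n, m)) < 0.3          # roughly 30% of entries observed
observed = np.argwhere(mask)             # (k, 2) array of observed (i, j) pairs

# Factorized variables U, V; the squared-error objective is non-convex in (U, V) jointly.
U = 0.1 * rng.normal(size=(n, rank))
V = 0.1 * rng.normal(size=(m, rank))

lr, n_epochs = 0.02, 200
for epoch in range(n_epochs):
    rng.shuffle(observed)                # visit observed entries in random order
    for i, j in observed:
        # Residual on one observed entry; gradient step on U[i] and V[j]
        # (the constant factor 2 is absorbed into the learning rate).
        r = U[i] @ V[j] - M[i, j]
        U[i], V[j] = U[i] - lr * r * V[j], V[j] - lr * r * U[i]

rmse = np.sqrt(np.mean((U @ V.T - M)[mask] ** 2))
print(f"RMSE on observed entries after SGD: {rmse:.4f}")

Despite the non-convexity of the factorized objective, plain SGD of this kind typically recovers the low-rank matrix well, which is the phenomenon the lecture's analysis tools aim to explain.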
Syllabus
Allen School Colloquia: Tengyu Ma (Princeton University)
Taught by
Paul G. Allen School
Related Courses
Introduction to Artificial Intelligence - Stanford University via Udacity
Natural Language Processing - Columbia University via Coursera
Probabilistic Graphical Models 1: Representation - Stanford University via Coursera
Computer Vision: The Fundamentals - University of California, Berkeley via Coursera
Learning from Data (Introductory Machine Learning course) - California Institute of Technology via Independent