Non-negative Gauss-Newton Methods for Empirical Risk Minimization

Offered By: Paul G. Allen School via YouTube

Tags

Machine Learning Courses
Deep Learning Courses
Reinforcement Learning Courses
Stochastic Optimization Courses

Course Description

Overview

Explore a distinguished seminar on optimization and data featuring Lin Xiao from Facebook AI Research. Delve into non-negative Gauss-Newton methods for empirical risk minimization, focusing on minimizing the average of numerous smooth but potentially non-convex functions. Learn how reformulating non-negative loss functions allows for the application of Gauss-Newton or Levenberg-Marquardt methods, resulting in highly adaptive algorithms. Discover the convergence analysis of these methods in convex, non-convex, and stochastic settings, comparing their performance to classical gradient methods. Gain insights from Lin Xiao's extensive experience in optimization theory and algorithms for deep learning and reinforcement learning, drawing from his work at Meta's Fundamental AI Research team and previous roles at Microsoft Research and top academic institutions.
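The reformulation described above can be illustrated with a short sketch. The idea, as presented in the talk's abstract, is that a non-negative loss f can be written as f(x) = ½h(x)², with h = √(2f); a damped (Levenberg-Marquardt) Gauss-Newton step on the scalar residual h then yields a gradient step with an automatically adaptive stepsize. The snippet below is a minimal illustration of that derivation, not the exact algorithm from the seminar; the function names, the toy least-squares problem, and the choice of damping parameter `gamma` are all assumptions made for the demo.

```python
import numpy as np

def ngn_step(x, f, grad, gamma=0.01, eps=1e-12):
    """One damped Gauss-Newton step for a non-negative loss f.

    Writing f(x) = 0.5 * h(x)**2 with h = sqrt(2 f), the Levenberg-Marquardt
    step on h (damping 1/gamma) reduces to a gradient step on f with the
    adaptive stepsize gamma / (1 + gamma * ||grad f(x)||^2 / (2 f(x))).
    """
    fx, g = f(x), grad(x)
    step = gamma / (1.0 + gamma * np.dot(g, g) / (2.0 * fx + eps))
    return x - step * g

# Toy non-negative loss: f(x) = 0.5 * ||A x - b||^2 (hypothetical demo problem)
rng = np.random.default_rng(0)
A, b = rng.standard_normal((20, 5)), rng.standard_normal(20)
f = lambda x: 0.5 * np.sum((A @ x - b) ** 2)
grad = lambda x: A.T @ (A @ x - b)

x0 = np.zeros(5)
x = x0
for _ in range(200):
    x = ngn_step(x, f, grad)
print(f(x) < f(x0))  # the iterates reduce the loss
```

Note that the adaptive stepsize never exceeds `gamma`, and it shrinks automatically when the gradient is large relative to the current loss; how large `gamma` may safely be chosen is exactly the kind of question the seminar's convergence analysis addresses.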

Syllabus

Distinguished Seminar in Optimization and Data: Lin Xiao (Facebook AI Research)


Taught by

Paul G. Allen School

Related Courses

Neural Networks for Machine Learning
University of Toronto via Coursera
機器學習技法 (Machine Learning Techniques)
National Taiwan University via Coursera
Machine Learning Capstone: An Intelligent Application with Deep Learning
University of Washington via Coursera
Прикладные задачи анализа данных (Applied Problems of Data Analysis)
Moscow Institute of Physics and Technology via Coursera
Leading Ambitious Teaching and Learning
Microsoft via edX