Neural Networks Meet Nonparametric Regression: Generalization by Weight Decay and Large Learning Rates
Offered By: Simons Institute via YouTube
Course Description
Overview
Explore the intersection of neural networks and nonparametric regression in this lecture by Yu-Xiang Wang of UC San Diego. Delve into why overparameterized deep learning models outperform classical methods such as kernels and splines. Examine how standard hyperparameter tuning in deep neural networks (DNNs) implicitly uncovers hidden sparsity and low-dimensional structure, leading to improved adaptivity. Gain new perspectives on overparameterization, representation learning, and the generalization of neural networks through optimization-induced implicit biases such as Edge-of-Stability and Minima Stability. Analyze theory and examples illustrating how DNNs achieve adaptive and near-optimal generalization, shedding light on their effectiveness in practical applications.
Syllabus
Neural Networks Meet Nonparametric Regression: Generalization by Weight Decay and Large Learning Rates
Taught by
Simons Institute
Related Courses
Neural Networks for Machine Learning — University of Toronto via Coursera
機器學習技法 (Machine Learning Techniques) — National Taiwan University via Coursera
Machine Learning Capstone: An Intelligent Application with Deep Learning — University of Washington via Coursera
Прикладные задачи анализа данных (Applied Problems in Data Analysis) — Moscow Institute of Physics and Technology via Coursera
Leading Ambitious Teaching and Learning — Microsoft via edX