Neural Networks Meet Nonparametric Regression: Generalization by Weight Decay and Large Learning Rates
Offered By: Simons Institute via YouTube
Course Description
Overview
Explore the intersection of neural networks and nonparametric regression in this insightful lecture by Yu-Xiang Wang from UC San Diego. Delve into the reasons behind the superior performance of overparameterized deep learning models compared to classical methods like kernels and splines. Examine how standard hyperparameter tuning in deep neural networks (DNNs) implicitly uncovers hidden sparsity and low-dimensional structures, leading to improved adaptivity. Gain new perspectives on overparameterization, representation learning, and the generalization capabilities of neural networks through optimization-algorithm induced implicit biases such as Edge-of-Stability and Minima Stability. Analyze theory and examples that illustrate how DNNs achieve adaptive and near-optimal generalization, shedding light on their effectiveness in practical applications.
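The two hyperparameters in the lecture's title, weight decay and the learning rate, enter through the SGD update rule. As a minimal sketch (not taken from the lecture), the snippet below shows that for vanilla SGD, a decoupled weight-decay step is identical to running gradient descent on an L2-regularized loss; the variable names and constants are illustrative only.

```python
import numpy as np

# Sketch: one SGD step with weight decay.
# Decoupled weight decay:  w <- w - lr * grad - lr * wd * w
# For plain SGD this equals descending on loss + (wd/2) * ||w||^2.

rng = np.random.default_rng(0)
w = rng.normal(size=5)       # current weights (illustrative)
grad = rng.normal(size=5)    # gradient of the unregularized loss

lr, wd = 0.1, 0.01           # learning rate and weight-decay coefficient

w_decayed = w - lr * grad - lr * wd * w   # decoupled weight decay
w_l2 = w - lr * (grad + wd * w)           # L2 penalty folded into the gradient

assert np.allclose(w_decayed, w_l2)       # identical updates for vanilla SGD
```

The lecture's point is that tuning these two knobs does more than regularize norms: it induces implicit biases (e.g. via minima stability at large learning rates) that this one-step identity alone does not capture.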
Syllabus
Neural Networks Meet Nonparametric Regression: Generalization by Weight Decay and Large Learning Rates
Taught by
Simons Institute
Related Courses
機器學習技法 (Machine Learning Techniques) - National Taiwan University via Coursera
Utilisez des modèles supervisés non linéaires (Use Nonlinear Supervised Models) - CentraleSupélec via OpenClassrooms
Statistical Machine Learning - Eberhard Karls University of Tübingen via YouTube
Interplay of Linear Algebra, Machine Learning, and HPC - JuliaCon 2021 Keynote - The Julia Programming Language via YouTube
Interpolation and Learning With Scale Dependent Kernels - MITCBMM via YouTube