YoVDO

Overfitting: Benign, Tempered and Harmful - Lecture on Machine Learning Regularization

Offered By: Institute for Pure & Applied Mathematics (IPAM) via YouTube

Tags

Overfitting Courses Machine Learning Courses Neural Networks Courses Regularization Courses High-dimensional Data Courses Generalization Courses Bias-Variance Tradeoff Courses Statistical Learning Theory Courses

Course Description

Overview

Explore a 51-minute conference talk by Michael Murray from the University of Bath, presented at IPAM's Analyzing High-dimensional Traces of Intelligent Behavior Workshop. Delve into a nuanced understanding of overfitting in neural networks that challenges conventional wisdom about regularization and generalization. Examine the surprising phenomenon of models achieving near-zero loss on noisy training data while still generalizing well to test data. Discover the concepts of benign and tempered overfitting, and investigate how data properties such as regularity, signal strength, and the ratio of data points to dimensions influence which overfitting regime arises. Gain insight into the relationship between data characteristics and model performance in the context of a simple data model, and into the factors driving transitions between the different overfitting regimes in high-dimensional data analysis.
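The central phenomenon the talk examines — a model that interpolates noisy training labels yet still generalizes — can be reproduced with a toy experiment. The sketch below (an illustration chosen for this listing, not the speaker's setup) uses a 1-nearest-neighbour classifier, which by construction fits every training label exactly, on synthetic Gaussian data with 10% of training labels flipped; despite zero training error, test accuracy stays well above chance.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_train, n_test = 10, 500, 500
w = rng.normal(size=d)  # ground-truth linear separator (hypothetical data model)

def make_data(n, label_noise=0.0):
    """Gaussian inputs, linear labels, with a fraction of labels flipped."""
    X = rng.normal(size=(n, d))
    y = np.sign(X @ w)
    flip = rng.random(n) < label_noise
    y[flip] *= -1
    return X, y

X_tr, y_tr = make_data(n_train, label_noise=0.1)  # 10% noisy training labels
X_te, y_te = make_data(n_test)                    # clean test labels

def one_nn_predict(X):
    # 1-NN interpolates the training set: each training point is its own
    # nearest neighbour, so training error is exactly zero.
    sq_dists = ((X[:, None, :] - X_tr[None, :, :]) ** 2).sum(axis=-1)
    return y_tr[sq_dists.argmin(axis=1)]

train_acc = (one_nn_predict(X_tr) == y_tr).mean()
test_acc = (one_nn_predict(X_te) == y_te).mean()
print(train_acc)  # 1.0 — the noisy labels are fit perfectly
print(test_acc)   # remains well above the 0.5 chance level
```

Whether such interpolation is benign, tempered, or harmful depends on exactly the data properties the talk discusses: label-noise level, signal strength, and the ratio of sample size to dimension.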

Syllabus

Michael Murray - Overfitting: benign, tempered and harmful - IPAM at UCLA


Taught by

Institute for Pure & Applied Mathematics (IPAM)

Related Courses

Statistical Machine Learning
Eberhard Karls University of Tübingen via YouTube
The Information Bottleneck Theory of Deep Neural Networks
Simons Institute via YouTube
Interpolation and Learning With Scale Dependent Kernels
MITCBMM via YouTube
Statistical Learning Theory and Applications - Class 16
MITCBMM via YouTube
Statistical Learning Theory and Applications - Class 6
MITCBMM via YouTube