Overfitting: Benign, Tempered and Harmful - Lecture on Machine Learning Regularization

Offered By: Institute for Pure & Applied Mathematics (IPAM) via YouTube

Tags

Overfitting Courses
Machine Learning Courses
Neural Networks Courses
Regularization Courses
High-dimensional Data Courses
Generalization Courses
Bias-Variance Tradeoff Courses
Statistical Learning Theory Courses

Course Description

Overview

Explore a 51-minute conference talk by Michael Murray of the University of Bath, presented at IPAM's Analyzing High-dimensional Traces of Intelligent Behavior Workshop. Delve into a nuanced understanding of overfitting in neural networks that challenges conventional wisdom about regularization and generalization. Examine the surprising phenomenon of models achieving near-zero loss on noisy training data while still performing well on test data. Discover the distinction between benign and tempered overfitting, and investigate how data properties such as regularity, signal strength, and the ratio of data points to dimensions influence which overfitting outcome occurs. Gain insight into the relationship between data characteristics and model performance in the context of a simple data model, and into the factors driving transitions between the different overfitting regimes in high-dimensional data analysis.
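The phenomenon the talk centers on can be illustrated with a minimal sketch (not code from the talk, and not Murray's model): a minimum-norm linear interpolator over a hypothetical spiked covariance, where a few strong directions carry the signal and many weak tail directions absorb the label noise. All parameter choices below (sample size, dimensions, noise level, feature scales) are assumptions made for illustration.

```python
import numpy as np

# Hypothetical sketch of benign overfitting in an overparameterized
# linear model; parameters are illustrative assumptions, not the talk's.
rng = np.random.default_rng(0)

n, d_signal, d_tail = 50, 5, 1995   # 50 samples, 2000 features total
sigma = 0.5                         # label noise level

def sample_X(m):
    # Strong (signal-bearing) directions have unit scale; the many weak
    # tail directions have small scale and soak up noise cheaply.
    strong = rng.normal(size=(m, d_signal))
    weak = 0.1 * rng.normal(size=(m, d_tail))
    return np.hstack([strong, weak])

w_star = np.concatenate([np.ones(d_signal), np.zeros(d_tail)])

X = sample_X(n)
y = X @ w_star + sigma * rng.normal(size=n)  # noisy training labels

# Minimum-norm solution: with d >> n it interpolates the noisy labels exactly.
w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

train_mse = np.mean((X @ w_hat - y) ** 2)

X_test = sample_X(2000)
test_mse = np.mean((X_test @ w_hat - X_test @ w_star) ** 2)

print(f"train MSE: {train_mse:.2e}")  # essentially zero: the noise is fit exactly
print(f"test MSE:  {test_mse:.3f}")   # yet well below the null risk ||w*||^2 = 5
```

Despite driving training loss to zero on noisy labels, the interpolator's test error stays far below that of the trivial zero predictor, because the noise is fit along the cheap tail directions while the signal directions are recovered. Shrinking the tail or strengthening the noise pushes the same setup toward tempered or harmful overfitting.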

Syllabus

Michael Murray - Overfitting: benign, tempered and harmful - IPAM at UCLA


Taught by

Institute for Pure & Applied Mathematics (IPAM)

Related Courses

Neural Networks for Machine Learning
University of Toronto via Coursera
Good Brain, Bad Brain: Basics
University of Birmingham via FutureLearn
Statistical Learning with R
Stanford University via edX
Machine Learning 1—Supervised Learning
Brown University via Udacity
Fundamentals of Neuroscience, Part 2: Neurons and Networks
Harvard University via edX