The Elusive Generalization and Easy Optimization - Part 2

Offered By: Institute for Pure & Applied Mathematics (IPAM) via YouTube

Tags

Machine Learning Courses, Data Science Courses, Neural Networks Courses, Gradient Descent Courses, Overfitting Courses, Generalization Courses, Statistical Learning Theory Courses

Course Description

Overview

Explore the second part of a comprehensive lecture on generalization and optimization in machine learning, presented by Misha Belkin from the University of California, San Diego. Delve into the evolving understanding of generalization in machine learning, focusing on recent developments driven by empirical findings in neural networks. Examine how these discoveries have necessitated a reevaluation of theoretical foundations. Gain insights into the optimization process using gradient descent and discover why large non-convex systems are surprisingly easy to optimize through local methods. Enhance your knowledge of key concepts in data science and artificial intelligence through this in-depth presentation, part of IPAM's Mathematics of Intelligences Tutorials at UCLA.
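To make the optimization claim above concrete, here is a minimal, self-contained sketch (not taken from the lecture; all hyperparameters are illustrative) of the phenomenon it describes: plain gradient descent on a heavily overparameterized two-layer ReLU network, whose training loss is non-convex, typically still drives that loss to near zero from a random start.

```python
# A hedged sketch of "easy optimization" in a large non-convex system:
# fit arbitrary labels with a wide two-layer ReLU net via plain gradient
# descent. With width m >> n samples, the train loss usually reaches ~0.
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 20, 5, 2000                    # samples, input dim, hidden width
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)               # arbitrary labels: we only interpolate

W = rng.standard_normal((m, d)) / np.sqrt(d)   # hidden-layer weights
a = rng.standard_normal(m) / np.sqrt(m)        # output weights

lr = 0.05
for step in range(2001):
    H = X @ W.T                          # (n, m) pre-activations
    R = np.maximum(H, 0.0)               # ReLU features
    err = R @ a - y
    loss = 0.5 * np.mean(err ** 2)
    if step % 500 == 0:
        print(f"step {step:5d}  train loss {loss:.2e}")
    # Gradients of the squared loss, derived by hand.
    grad_a = R.T @ err / n
    grad_W = ((err[:, None] * (H > 0)) * a).T @ X / n
    a -= lr * grad_a
    W -= lr * grad_W
```

Running this prints a training loss that decays toward zero despite the random labels, a small-scale analogue of the behavior the lecture examines in large neural networks.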

Syllabus

Misha Belkin - The elusive generalization and easy optimization, Pt. 2 of 2 - IPAM at UCLA


Taught by

Institute for Pure & Applied Mathematics (IPAM)

Related Courses

Neural Networks for Machine Learning
University of Toronto via Coursera
Good Brain, Bad Brain: Basics
University of Birmingham via FutureLearn
Statistical Learning with R
Stanford University via edX
Machine Learning 1—Supervised Learning
Brown University via Udacity
Fundamentals of Neuroscience, Part 2: Neurons and Networks
Harvard University via edX