YoVDO

Tightening Information-Theoretic Generalization Bounds with Data-Dependent Estimates with an Application to SGLD - Daniel Roy

Offered By: Institute for Advanced Study via YouTube

Tags

Machine Learning Courses · Deep Learning Courses · Information Theory Courses

Course Description

Overview

Explore a workshop presentation on tightening information-theoretic generalization bounds with data-dependent estimates, focusing on their application to Stochastic Gradient Langevin Dynamics (SGLD). Delve into the nature of generalization understanding, open problems, and barriers in the field. Examine non-vacuous bounds, stochastic gradient dynamics, and expected generalization error. Learn from Daniel Roy of the University of Toronto as he discusses these advanced concepts in deep learning theory, providing insights into current challenges and potential future directions in the field.
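As background context (not stated in the listing itself): the information-theoretic generalization bounds the talk tightens are commonly stated, following Xu and Raginsky (2017), in terms of the mutual information between the training sample and the learned weights. A standard form of the bound, under a sub-Gaussian loss assumption, is:

```latex
% Mutual-information generalization bound (Xu & Raginsky, 2017):
% if the loss is \sigma-sub-Gaussian under the data distribution, then
\[
  \bigl|\,\mathbb{E}\!\left[ L_\mu(W) - L_S(W) \right]\bigr|
  \;\le\;
  \sqrt{\frac{2\sigma^2}{n}\, I(S; W)},
\]
% where S is the n-sample training set, W the weights output by the
% learning algorithm, L_S the empirical risk, L_\mu the population risk,
% and I(S;W) the mutual information between sample and weights.
```

Data-dependent estimates of the mutual information term, which this talk addresses, can make such bounds much tighter for iterative algorithms like SGLD.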

Syllabus

Intro
The nature of generalization understanding
Open problem
Barriers
Non-vacuous bounds
Stochastic gradient dynamics
Expected generalization error
Plot
Conclusion
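The "Stochastic gradient dynamics" item refers to Stochastic Gradient Langevin Dynamics (SGLD): a gradient step perturbed by Gaussian noise whose variance is tied to the step size and an inverse temperature. A minimal sketch on a toy quadratic loss (the function names, step size, and temperature here are illustrative choices, not taken from the talk):

```python
import math
import random

def sgld_step(w, grad, step_size, inv_temp=1.0, rng=random):
    """One SGLD update: a gradient descent step plus Gaussian noise
    with variance 2 * step_size / inv_temp (the standard Langevin
    discretization)."""
    noise = rng.gauss(0.0, math.sqrt(2.0 * step_size / inv_temp))
    return w - step_size * grad(w) + noise

# Toy loss L(w) = (w - 3)^2 / 2, so grad L(w) = w - 3 and the
# minimizer is w* = 3. A fairly large inverse temperature keeps
# the stationary distribution tightly concentrated around w*.
random.seed(0)
w = 0.0
for _ in range(2000):
    w = sgld_step(w, lambda v: v - 3.0, step_size=0.01, inv_temp=100.0)
# After many steps, w fluctuates in a small neighborhood of 3.
```

Because the iterates are a Markov chain driven by the data through the gradients, the trajectory itself carries the information measured by the bounds discussed in the talk.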


Taught by

Institute for Advanced Study

Related Courses

Neural Networks for Machine Learning
University of Toronto via Coursera
機器學習技法 (Machine Learning Techniques)
National Taiwan University via Coursera
Machine Learning Capstone: An Intelligent Application with Deep Learning
University of Washington via Coursera
Прикладные задачи анализа данных (Applied Problems in Data Analysis)
Moscow Institute of Physics and Technology via Coursera
Leading Ambitious Teaching and Learning
Microsoft via edX