Can Learning Theory Resist Deep Learning? - Francis Bach, INRIA
Offered By: Alan Turing Institute via YouTube
Course Description
Overview
Syllabus
Intro
Scientific context
Parametric supervised machine learning
Convex optimization problems
Exponentially convergent SGD for smooth finite sums
Exponentially convergent SGD for finite sums
Convex optimization for machine learning
Theoretical analysis of deep learning
Optimization for multi-layer neural networks
Gradient descent for a single hidden layer
Optimization on measures
Many particle limit and global convergence (Chizat and Bach, 2018a)
Simple simulations with neural networks
From qualitative to quantitative results?
Lazy training (Chizat and Bach, 2018)
From lazy training to neural tangent kernel
Are state-of-the-art neural networks in the lazy regime?
Is the neural tangent kernel useful in practice?
Can learning theory resist deep learning?
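As a rough illustration of one syllabus topic above ("Gradient descent for a single hidden layer"), the sketch below trains a one-hidden-layer network on a toy regression task with plain full-batch gradient descent. This is not code from the lecture; all names, the 1/m output scaling (in the spirit of the mean-field analyses of Chizat and Bach), and the hyperparameters are illustrative assumptions.

```python
# Illustrative sketch (not from the lecture): full-batch gradient descent
# on a single-hidden-layer tanh network for a toy 1-D regression problem.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: fit y = sin(3x) on [-1, 1]
X = rng.uniform(-1, 1, size=(64, 1))
y = np.sin(3 * X)[:, 0]

m = 100                         # number of hidden units (assumed)
W = rng.normal(size=(1, m))     # input weights
b = rng.normal(size=(m,))       # hidden biases
a = rng.normal(size=(m,)) / m   # output weights, 1/m mean-field-style scaling

def forward(X):
    H = np.tanh(X @ W + b)      # hidden activations, shape (n, m)
    return H, H @ a             # network output, shape (n,)

_, pred = forward(X)
init_mse = np.mean((pred - y) ** 2)   # loss before training

lr = 0.01                       # small step size chosen for stability
for step in range(1000):
    H, pred = forward(X)
    err = pred - y              # residuals, shape (n,)
    # Gradients of the mean squared error w.r.t. each parameter block
    grad_a = H.T @ err / len(X)
    dH = np.outer(err, a) * (1 - H ** 2)   # backprop through tanh
    grad_W = X.T @ dH / len(X)
    grad_b = dH.mean(axis=0)
    a -= lr * grad_a
    W -= lr * grad_W
    b -= lr * grad_b

_, pred = forward(X)
mse = np.mean((pred - y) ** 2)  # loss after training (should have decreased)
```

With the small output scaling, most of the early progress comes from the linear output layer, which loosely echoes the lazy-training discussion in the syllabus.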
Taught by
Alan Turing Institute
Related Courses
Neural Networks for Machine Learning - University of Toronto via Coursera
機器學習技法 (Machine Learning Techniques) - National Taiwan University via Coursera
Machine Learning Capstone: An Intelligent Application with Deep Learning - University of Washington via Coursera
Прикладные задачи анализа данных (Applied Problems of Data Analysis) - Moscow Institute of Physics and Technology via Coursera
Leading Ambitious Teaching and Learning - Microsoft via edX