Approximation Theory of Deep Learning from the Dynamical Viewpoint
Offered By: Fields Institute via YouTube
Course Description
Overview
Syllabus
Intro
Deep Learning: Theory vs Practice
Composition is Dynamics
Supervised Learning
The Problem of Approximation
Example: Approximation by Trigonometric Polynomials
The Continuum Idealization of Residual Networks
How do dynamics approximate functions?
Universal Approximation by Dynamics
Approximation of Symmetric Functions by Dynamical Hypothesis Spaces
Sequence Modelling Applications
DL Architectures for Sequence Modelling
Modelling Static vs Dynamic Relationships
An Approximation Theory for Sequence Modelling
The Recurrent Neural Network Hypothesis Space
The Linear RNN Hypothesis Space
Properties of Linear RNN Hypothesis Space
Approximation Guarantee (Density)
Smoothness and Memory
Insights on the (Linear) RNN Hypothesis Space
Convolutional Architectures
Encoder-Decoder Architectures
Extending the RNN Analysis
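The syllabus items "Composition is Dynamics" and "The Continuum Idealization of Residual Networks" refer to the standard view of a deep residual network as a discretized dynamical system: stacking residual blocks x + h·f(x) is a forward-Euler discretization of the ODE dx/dt = f(x, θ(t)). A minimal sketch of this correspondence (the tanh activation, step size, and function names are illustrative choices, not taken from the lecture):

```python
import numpy as np

def residual_block(x, W, b, h):
    """One residual block: x + h * sigma(W x + b), i.e. a forward-Euler
    step of the ODE dx/dt = sigma(W x + b) with step size h."""
    return x + h * np.tanh(W @ x + b)

def resnet_forward(x, params, h):
    """Compose T residual blocks: the discrete dynamics x_{t+1} = x_t + h f(x_t, theta_t),
    which approximate the continuous flow as T grows and h = 1/T shrinks."""
    for W, b in params:
        x = residual_block(x, W, b, h)
    return x

# Illustrative random weights: T layers acting on a d-dimensional state.
rng = np.random.default_rng(0)
d, T = 4, 50
params = [(rng.normal(size=(d, d)) / np.sqrt(d), rng.normal(size=d))
          for _ in range(T)]
x0 = rng.normal(size=d)
xT = resnet_forward(x0, params, h=1.0 / T)  # state after flowing for unit "time"
```

With step size h = 1/T, the depth-T network plays the role of integrating the dynamics over a unit time interval; the approximation-theory question in the lecture is which target functions the resulting flow map x0 ↦ xT can represent.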
Taught by
Fields Institute
Related Courses
Introduction to Dynamical Systems and Chaos (Santa Fe Institute via Complexity Explorer)
Nonlinear Dynamics 1: Geometry of Chaos (Georgia Institute of Technology via Independent)
Linear Differential Equations (Boston University via edX)
Algorithmic Information Dynamics: From Networks to Cells (Santa Fe Institute via Complexity Explorer)
Nonlinear Differential Equations: Order and Chaos (Boston University via edX)