YoVDO

Nonlinear Approximation by Deep ReLU Networks - Ron DeVore, Texas A&M

Offered By: Alan Turing Institute via YouTube

Tags

Deep Neural Networks Courses
Data Science Courses
Neural Network Architecture Courses

Course Description

Overview

Explore the mathematics behind deep learning in this 47-minute conference talk on nonlinear approximation using deep ReLU networks. Delve into the architecture of neural networks, focusing on ReLU activation functions and their role in approximation theory. Examine the structure of TW.L, compare it with other approaches, and analyze approximation errors and classes. Investigate more general constructions and their consequences, including extremes and manifold approximation. Learn about three key theorems and covering techniques. Gain insights into cutting-edge advances in data science, bridging gaps between computational statistics, machine learning, optimization, information theory, and learning theory.
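The talk's core idea, that ReLU networks are natural tools for piecewise-linear approximation, can be illustrated with a standard textbook construction (not taken from the talk itself): the "hat" function on [0, 1], realized exactly by a one-hidden-layer network with three ReLU units.

```python
def relu(x):
    """ReLU activation: max(0, x)."""
    return max(0.0, x)

def hat(x):
    """Hat function on [0, 1], written as a one-hidden-layer ReLU network
    with three units: 0 at x = 0 and x = 1, peaking at 1 when x = 1/2."""
    return 2 * relu(x) - 4 * relu(x - 0.5) + 2 * relu(x - 1.0)

# Sample the network on a small grid
values = [round(hat(i / 4), 2) for i in range(5)]
print(values)  # [0.0, 0.5, 1.0, 0.5, 0.0]
```

Compositions and sums of such hat functions underpin many of the approximation-rate results for deep ReLU networks discussed in this line of work.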

Syllabus

Intro
Deep Neural Networks
ReLU Networks
Architecture of Neural Networks
Structure of TW.L
Comparing TW.L with other approaches
Approximation Error
Approximation Classes
More general construction
Consequences
Extremes
Let us be careful
Manifold Approximation
Three Theorems
Covering
Last Thoughts


Taught by

Alan Turing Institute

Related Courses

Sequences, Time Series and Prediction
DeepLearning.AI via Coursera
A Beginners Guide to Data Science
Udemy
Artificial Neural Networks (ANN) Made Easy
Udemy
Deep Learning for Mechanical Engineers (Makine Mühendisleri için Derin Öğrenme)
Udemy
Customer Analytics in Python
Udemy