Deep Learning for Scientific Computing - Two Stories on the Gap Between Theory & Practice - Ben Adcock
Offered By: Alan Turing Institute via YouTube
Syllabus
Intro
Main collaborators
Deep Learning (DL) for scientific computing
This talk: two stories on the theory-practice gap
Parametric modelling
Challenges
MLFA: examining the practical performance of DNNs
Limited performance for smooth, univariate approximation
Balancing architecture size
Smooth, multivariate functions
Piecewise smooth function approximation
Theoretical insights
DNN existence theory for holomorphic functions
Practical DNN existence theorem: Hilbert-valued case
Discussion
Deep learning for inverse problems
Further examples
These are not rare events
Unpredictable generalization
The universal instability theorem
Hallucinations in practice
Construction: unravelling and restarts
FIRENETs example
Conclusions
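The syllabus item on limited performance for smooth, univariate approximation can be illustrated with a minimal sketch (hypothetical, not taken from the talk): fitting a small one-hidden-layer network to a smooth 1D function with plain gradient descent, the kind of experiment used to probe the gap between approximation theory and trained networks. All names and hyperparameters below are illustrative choices.

```python
# Illustrative sketch only: a tiny one-hidden-layer tanh network trained by
# gradient descent to approximate a smooth univariate function on [-1, 1].
# This is not the talk's code; widths, learning rate, and step count are
# arbitrary assumptions for demonstration.
import numpy as np

rng = np.random.default_rng(0)

# Target: a smooth univariate function on [-1, 1]
x = np.linspace(-1.0, 1.0, 200).reshape(-1, 1)
y = np.sin(np.pi * x)

# One hidden layer of width 20 with tanh activation
width = 20
W1 = rng.normal(scale=1.0, size=(1, width))
b1 = np.zeros(width)
W2 = rng.normal(scale=0.1, size=(width, 1))
b2 = np.zeros(1)

def forward(x):
    # Returns hidden activations and network output
    h = np.tanh(x @ W1 + b1)
    return h, h @ W2 + b2

def mse(pred):
    return float(np.mean((pred - y) ** 2))

lr = 0.1
_, pred0 = forward(x)
loss0 = mse(pred0)  # loss at initialization

for _ in range(5000):
    h, pred = forward(x)
    err = 2.0 * (pred - y) / len(x)      # d(MSE)/d(pred)
    # Backpropagate through the two layers
    gW2 = h.T @ err
    gb2 = err.sum(axis=0)
    dh = (err @ W2.T) * (1.0 - h ** 2)   # tanh derivative
    gW1 = x.T @ dh
    gb1 = dh.sum(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, pred1 = forward(x)
loss1 = mse(pred1)
print(f"initial MSE {loss0:.4f} -> trained MSE {loss1:.4f}")
```

In practice the trained error plateaus well above what classical approximation results (e.g. spline or polynomial rates for smooth functions) would suggest is achievable at this parameter count, which is the kind of theory-practice gap the talk examines.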
Taught by
Alan Turing Institute
Related Courses
Neural Networks for Machine Learning (University of Toronto via Coursera)
Good Brain, Bad Brain: Basics (University of Birmingham via FutureLearn)
Statistical Learning with R (Stanford University via edX)
Machine Learning 1—Supervised Learning (Brown University via Udacity)
Fundamentals of Neuroscience, Part 2: Neurons and Networks (Harvard University via edX)