The Mathematical Universe Behind Deep Neural Networks - Rothschild Lecture
Offered By: Isaac Newton Institute for Mathematical Sciences via YouTube
Course Description
Overview
Embark on a captivating journey through the mathematical universe behind deep neural networks in this Rothschild Lecture delivered by Professor Helmut Bölcskei of ETH Zürich. Explore the theoretical underpinnings of deep neural networks, drawing on functional analysis, harmonic analysis, complex analysis, approximation theory, dynamical systems, Kolmogorov complexity, optimal transport, and fractal geometry. Gain insight into the mathematical foundations behind breakthrough results in practical machine learning tasks such as image classification, image captioning, control policy learning for the board game Go, and protein structure prediction. Discover how these mathematical concepts contribute to the remarkable success of deep neural networks across applications.
Syllabus
Date: 10 December 2021 – 16:00 to
Taught by
Isaac Newton Institute for Mathematical Sciences
Related Courses
Optimal Transport and PDE - Gradient Flows in the Wasserstein Metric (Simons Institute via YouTube)
Crash Course on Optimal Transport (Simons Institute via YouTube)
Learning From Ranks, Learning to Rank - Jean-Philippe Vert, Google Brain (Alan Turing Institute via YouTube)
Optimal Transport for Machine Learning - Gabriel Peyre, Ecole Normale Superieure (Alan Turing Institute via YouTube)
Regularization for Optimal Transport and Dynamic Time Warping Distances - Marco Cuturi (Alan Turing Institute via YouTube)