Networks that Adapt to Intrinsic Dimensionality Beyond the Domain
Offered By: Inside Livermore Lab via YouTube
Course Description
Overview
Explore how deep neural networks adapt to intrinsic dimensionality in this seminar by Alexander Cloninger of UC San Diego. Delve into the central question of how large a network must be to approximate a given function, and how the dimensionality of the data affects learning. Examine the approximation capabilities of ReLU networks for functions with dimensionality-reducing feature maps, focusing on projections onto low-dimensional submanifolds and distances to low-dimensional sets. Discover how deep nets remain faithful to an intrinsic dimension governed by the function itself rather than by the complexity of the domain. Investigate connections to two-sample testing, manifold autoencoders, and data generation. Learn about Dr. Cloninger's research in geometric data analysis and applied harmonic analysis, with applications in imaging, medicine, and artificial intelligence.
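To make the idea of an intrinsic dimension governed by the function concrete, here is a minimal PyTorch sketch. It is not taken from the talk, and the target function, network width, and hyperparameters are all illustrative assumptions. It fits a small ReLU network to a function on a 50-dimensional domain that depends on its input only through the distance to a one-dimensional set, the "distance to a low-dimensional set" structure the seminar studies.

# Toy sketch (illustrative, not from the talk): the target on a
# 50-dimensional domain depends on x only through its distance to a
# one-dimensional set S (the first coordinate axis), so its intrinsic
# dimension is 1 even though the ambient dimension D is 50.
import torch
import torch.nn as nn

torch.manual_seed(0)
D = 50  # ambient dimension of the domain

def target(x):
    # f(x) = g(dist(x, S)) with S = the first coordinate axis;
    # the distance to S is the norm of the remaining D - 1 coordinates.
    dist = x[:, 1:].norm(dim=1, keepdim=True)
    return torch.sin(dist)

# A modest ReLU network; the width is sized for the one-dimensional
# feature dist(x, S), not for a generic 50-dimensional function.
net = nn.Sequential(
    nn.Linear(D, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),
)

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(2000):
    x = torch.randn(256, D)  # fresh high-dimensional samples
    loss = loss_fn(net(x), target(x))
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"final training loss: {loss.item():.4f}")

The sketch mirrors the seminar's theme: the network capacity needed to drive the loss down tracks the one-dimensional feature dist(x, S) rather than the 50-dimensional domain.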
Syllabus
Introduction
Speaker Introduction
Overview
Neural Networks
The Curse of Dimensionality
Theory
Main Question
Manifold Learning Community
Reach of a Manifold
Linear Regression
Approximation Theory
Classification
Excess Risk
Recent Work
Chart Auto-Encoders
Neural Network Construction
Linear Encoders
Clustered Data
Questions
Conclusion
Hybrid Seminar
Taught by
Inside Livermore Lab
Related Courses
From Reinforcement Learning to Spin Glasses - The Many Surprises in Quantum State Preparation (APS Physics via YouTube)
Mathematical Frameworks for Signal and Image Analysis - Diffusion Methods in Manifold and Fibre Bundle Learning (Joint Mathematics Meetings via YouTube)
Quantifying the Topology of Coma (Institute for Pure & Applied Mathematics (IPAM) via YouTube)
Reconstructing Manifolds by Weighted L_1-Norm Minimization (Applied Algebraic Topology Network via YouTube)
Demystifying Latschev's Theorem for Manifold Reconstruction (Applied Algebraic Topology Network via YouTube)