Optimal Neural Network Compressors and the Manifold Hypothesis

Offered By: Simons Institute via YouTube

Tags

Machine Learning, Trustworthy Machine Learning

Course Description

Overview

Explore the intersection of information theory and machine learning in this 36-minute lecture by Aaron Wagner of Cornell University. Delve into optimal neural network compressors and their relationship to the manifold hypothesis, and gain insight into how these principles contribute to the development of trustworthy machine learning systems. Examine the theoretical foundations and practical implications of neural network-based compression that preserves performance and reliability.

Syllabus

Optimal Neural Network Compressors and the Manifold Hypothesis


Taught by

Simons Institute

Related Courses

Information-Theoretic Foundations of Generative Adversarial Models
Simons Institute via YouTube
Generalization Bounds for Neural Network Based Decoders
Simons Institute via YouTube
Improving Accuracy-Privacy Tradeoff via Model Reprogramming
Simons Institute via YouTube
Fundamental Trade-Offs in FL/FA: Sparsity, DP, and Communication Constraints
Simons Institute via YouTube
Contraction of Markov Kernels and Differential Privacy - Part I
Simons Institute via YouTube