Optimal Neural Network Compressors and the Manifold Hypothesis

Offered By: Simons Institute via YouTube

Tags

Machine Learning Courses
Trustworthy Machine Learning Courses

Course Description

Overview

Explore the intersection of information theory and machine learning in this 36-minute lecture by Aaron Wagner of Cornell University. Delve into the concept of optimal neural network compressors and their relationship to the manifold hypothesis, and learn how these principles inform the design of trustworthy machine learning systems. Examine the theoretical foundations and practical implications of compression with neural networks while maintaining performance and reliability.

Syllabus

Optimal Neural Network Compressors and the Manifold Hypothesis
Taught by

Simons Institute

Related Courses

Introduction to Artificial Intelligence
Stanford University via Udacity
Natural Language Processing
Columbia University via Coursera
Probabilistic Graphical Models 1: Representation
Stanford University via Coursera
Computer Vision: The Fundamentals
University of California, Berkeley via Coursera
Learning from Data (Introductory Machine Learning course)
California Institute of Technology via Independent