Information Flow and Deep Representation Learning
Offered By: Open Data Science via YouTube
Course Description
Overview
Explore the fundamental role of representation learning in neural networks and its impact on advancing deep learning algorithms in this 45-minute conference talk. Delve into the information bottleneck analysis of deep learning algorithms, gaining insights into learning processes and patterns across layers of learned representations. Examine how this analysis provides practical perspectives on theoretical concepts in deep learning research, including nuisance insensitivity and disentanglement. Cover topics such as perception tasks, feature engineering, the information plane, geometric clustering, and representation space, concluding with a comprehensive recap of the discussed concepts.
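The information bottleneck analysis mentioned above studies each hidden layer T through two quantities: how much the representation retains about the input, I(X; T), and how much it carries about the label, I(T; Y). As a rough illustration (not the talk's own code), the sketch below estimates these "information plane" coordinates for a toy scalar representation using a simple histogram-based mutual information estimator; the variable names and the binning choice are assumptions for the example.

```python
import numpy as np

def mutual_information(x, y, bins=10):
    """Crude histogram-based estimate of I(X; Y) in bits."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()                 # joint distribution over bins
    px = pxy.sum(axis=1, keepdims=True)       # marginal of x
    py = pxy.sum(axis=0, keepdims=True)       # marginal of y
    nz = pxy > 0                              # avoid log(0)
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

# Toy setup: T is a noisy "hidden representation" of input X; label Y depends on X.
rng = np.random.default_rng(0)
x = rng.normal(size=5000)
t = x + 0.5 * rng.normal(size=5000)
y = (x > 0).astype(float)

# Information-plane coordinates for this one "layer": (I(X;T), I(T;Y)).
print(mutual_information(x, t), mutual_information(t, y))
```

Tracking these two coordinates per layer over training epochs is what produces the information-plane trajectories discussed in this line of work; note that histogram estimators like this one are biased and sensitive to the bin count, which is itself a debated point in the literature.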
Syllabus
Introduction
Agenda
Perception tasks
Representation learning
Black boxes
Feature engineering
Information Plane
Rafts
Bottom Line
Nuisance
Exceptions
Disentanglement
Total Correlation
Geometric Clustering
Representation Space
Recap
Taught by
Open Data Science
Related Courses
From Graph to Knowledge Graph – Algorithms and Applications
Microsoft via edX
Social Network Analysis
Indraprastha Institute of Information Technology Delhi via Swayam
Stanford Seminar - Representation Learning for Autonomous Robots, Anima Anandkumar
Stanford University via YouTube
Unsupervised Brain Models - How Does Deep Learning Inform Neuroscience?
Yannic Kilcher via YouTube
Emerging Properties in Self-Supervised Vision Transformers - Facebook AI Research Explained
Yannic Kilcher via YouTube