Information Flow and Deep Representation Learning
Offered By: Open Data Science via YouTube
Course Description
Overview
Explore the fundamental role of representation learning in neural networks and its impact on advancing deep learning algorithms in this 45-minute conference talk. Delve into the information bottleneck analysis of deep learning algorithms, gaining insights into learning processes and patterns across layers of learned representations. Examine how this analysis provides practical perspectives on theoretical concepts in deep learning research, including nuisance insensitivity and disentanglement. Cover topics such as perception tasks, feature engineering, the information plane, geometric clustering, and representation space, concluding with a comprehensive recap of the discussed concepts.
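To make the information-plane idea concrete, here is a minimal, illustrative sketch (not from the talk): in information bottleneck analysis, each layer's representation T of input X with label Y is placed at the point (I(X;T), I(T;Y)) on the information plane. The toy variables, the `mutual_information` helper, and the data below are assumptions for illustration only, using exact counts over small discrete samples.

```python
# Illustrative sketch of the "information plane" from information
# bottleneck analysis: a layer representation T of input X is scored
# by the pair (I(X;T), I(T;Y)). All names and data here are toy
# assumptions, not material from the talk itself.
from collections import Counter
from math import log2

def mutual_information(pairs):
    """I(A;B) in bits, estimated from a list of (a, b) samples."""
    n = len(pairs)
    joint = Counter(pairs)
    pa = Counter(a for a, _ in pairs)
    pb = Counter(b for _, b in pairs)
    mi = 0.0
    for (a, b), c in joint.items():
        p_ab = c / n
        # p(a,b) * log2( p(a,b) / (p(a) * p(b)) ), with counts expanded
        mi += p_ab * log2(p_ab * n * n / (pa[a] * pb[b]))
    return mi

# Toy "layer": T compresses a 2-bit input X down to 1 bit, yet keeps
# all the information needed to predict the label Y.
X = [0, 1, 2, 3] * 25
T = [x // 2 for x in X]              # lossy compression of X
Y = [1 if x >= 2 else 0 for x in X]  # label depends only on x // 2

print(mutual_information(list(zip(X, T))))  # I(X;T): 1.0 bit
print(mutual_information(list(zip(T, Y))))  # I(T;Y): 1.0 bit
```

Plotting these two quantities per layer over training is what produces the information-plane trajectories discussed in this line of research.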
Syllabus
Introduction
Agenda
Perception tasks
Representation learning
Black boxes
Feature engineering
Information Plane
Rafts
Bottom Line
Nuisance
Exceptions
Disentanglement
Total Correlation
Geometric Clustering
Representation Space
Recap
Taught by
Open Data Science
Related Courses
Neural Networks for Machine Learning
University of Toronto via Coursera
機器學習技法 (Machine Learning Techniques)
National Taiwan University via Coursera
Machine Learning Capstone: An Intelligent Application with Deep Learning
University of Washington via Coursera
Прикладные задачи анализа данных (Applied Data Analysis Problems)
Moscow Institute of Physics and Technology via Coursera
Leading Ambitious Teaching and Learning
Microsoft via edX