How to Represent Part-Whole Hierarchies in a Neural Network - Geoff Hinton's Paper Explained
Offered By: Yannic Kilcher via YouTube
Course Description
Overview
Explore Geoffrey Hinton's GLOM proposal for computer vision in this comprehensive video explanation. Dive into this approach, which combines ideas from transformers, neural fields, contrastive learning, capsule networks, denoising autoencoders, and RNNs to dynamically construct parse trees for object recognition. Learn about the multi-step consensus algorithm and the cross-column attention mechanism, and see how GLOM handles video input. Discover the potential of this approach for visual scene understanding, including discussions of the architecture, training methods, and design decisions.
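The multi-step consensus algorithm mentioned above can be sketched roughly: each column holds one embedding per level of the part-whole hierarchy, and every step averages four contributions per level (the previous state, a bottom-up prediction from the level below, a top-down prediction from the level above, and an attention-weighted average over the same level in other columns). This is a minimal, hypothetical sketch assuming random projections in place of the learned bottom-up/top-down networks; all shapes and names here are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
num_columns, num_levels, dim = 16, 5, 32
# state[c, l] is column c's embedding at hierarchy level l.
state = rng.normal(size=(num_columns, num_levels, dim))

# Stand-ins for the learned bottom-up and top-down networks
# (random linear maps here, purely illustrative).
W_bu = rng.normal(size=(dim, dim)) / np.sqrt(dim)
W_td = rng.normal(size=(dim, dim)) / np.sqrt(dim)

def glom_step(state):
    """One synchronous consensus update over all columns and levels."""
    new_state = np.empty_like(state)
    for l in range(num_levels):
        prev = state[:, l]                                   # (columns, dim)
        bottom_up = state[:, l - 1] @ W_bu if l > 0 else prev
        top_down = state[:, l + 1] @ W_td if l < num_levels - 1 else prev
        # Cross-column attention at the same level: weights are a
        # softmax over dot-product similarity, so columns with similar
        # embeddings pull toward each other and islands of agreement form.
        logits = prev @ prev.T / np.sqrt(dim)
        attn = np.exp(logits - logits.max(axis=1, keepdims=True))
        attn /= attn.sum(axis=1, keepdims=True)
        neighbor_avg = attn @ prev
        new_state[:, l] = (prev + bottom_up + top_down + neighbor_avg) / 4
    return new_state

for _ in range(10):
    state = glom_step(state)
```

Iterating the step lets same-level embeddings in nearby columns converge, which is the mechanism behind the "emergence of islands" topic in the syllabus.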
Syllabus
- Intro & Overview
- Object Recognition as Parse Trees
- Capsule Networks
- GLOM Architecture Overview
- Top-Down and Bottom-Up communication
- Emergence of Islands
- Cross-Column Attention Mechanism
- My Improvements for the Attention Mechanism
- Some Design Decisions
- Training GLOM as a Denoising Autoencoder & Contrastive Learning
- Coordinate Transformations & Representing Uncertainty
- How GLOM handles Video
- Conclusion & Comments
Taught by
Yannic Kilcher
Related Courses
Stanford Seminar - Audio Research: Transformers for Applications in Audio, Speech and Music
Stanford University via YouTube
OpenAI CLIP - Connecting Text and Images - Paper Explained
Aleksa Gordić - The AI Epiphany via YouTube
Learning Compact Representation with Less Labeled Data from Sensors
tinyML via YouTube
Human Activity Recognition - Learning with Less Labels and Privacy Preservation
University of Central Florida via YouTube
Robust Pre-Training by Adversarial Contrastive Learning - CAP6412 Spring 2021
University of Central Florida via YouTube