How to Represent Part-Whole Hierarchies in a Neural Network - Geoff Hinton's Paper Explained
Offered By: Yannic Kilcher via YouTube
Course Description
Overview
Explore Geoffrey Hinton's GLOM model for computer vision in this comprehensive video explanation. Dive into the approach, which combines ideas from transformers, neural fields, contrastive learning, capsule networks, denoising autoencoders, and recurrent neural networks to dynamically construct parse trees for object recognition. Learn about the multi-step consensus algorithm, the cross-column attention mechanism, and how GLOM handles video input. Discover the potential of this approach to visual scene understanding, including discussion of the architecture, training methods, and design decisions.
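For a concrete picture of the consensus and cross-column attention mechanism discussed in the video, the sketch below implements one GLOM-style update step in Python/NumPy, reconstructed from the paper's verbal description. The array names, the identity placeholder networks, the unrestricted (all-columns) attention, and the uniform averaging weights are illustrative assumptions, not Hinton's implementation; the paper describes GLOM as a conceptual design rather than a released system.

```python
# Minimal sketch of one GLOM-style update step (assumptions noted in comments).
import numpy as np

def cross_column_attention(level_embeddings):
    """Attention-weighted average of same-level embeddings across columns.

    Per the paper, the weight column x gives to column y is softmax_y(x . y),
    i.e. based only on embedding similarity, with no separate query/key/value
    projections. The paper restricts attention to nearby columns; this sketch
    attends over all columns for simplicity.
    """
    x = level_embeddings                          # shape: (num_columns, dim)
    logits = x @ x.T                              # pairwise dot products
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    weights = np.exp(logits)
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ x                            # row-wise weighted averages

def glom_step(state, bottom_up, top_down):
    """One synchronous update of all levels in all columns.

    state: array of shape (num_levels, num_columns, dim).
    bottom_up, top_down: per-level functions predicting a level's embedding
    from the level below / above (placeholders here; in the paper these are
    learned neural networks shared across all columns).
    """
    num_levels = state.shape[0]
    new_state = np.empty_like(state)
    for l in range(num_levels):
        contributions = [state[l], cross_column_attention(state[l])]
        if l > 0:
            contributions.append(bottom_up[l](state[l - 1]))
        if l < num_levels - 1:
            contributions.append(top_down[l](state[l + 1]))
        # The paper combines these contributions with a weighted average;
        # a plain mean is used here purely for illustration.
        new_state[l] = np.mean(contributions, axis=0)
    return new_state

# Toy usage: 3 levels, a 4x4 grid of columns flattened to 16 columns, dim 8.
num_levels, num_columns, dim = 3, 16, 8
state = np.random.randn(num_levels, num_columns, dim)
identity = lambda x: x  # stand-in for the learned bottom-up/top-down nets
state = glom_step(state, [identity] * num_levels, [identity] * num_levels)
```

Iterating this step is what the paper means by reaching a consensus: at higher levels, nearby columns settle on nearly identical embeddings, and these "islands" of agreement play the role of nodes in the parse tree.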
Syllabus
- Intro & Overview
- Object Recognition as Parse Trees
- Capsule Networks
- GLOM Architecture Overview
- Top-Down and Bottom-Up Communication
- Emergence of Islands
- Cross-Column Attention Mechanism
- My Improvements for the Attention Mechanism
- Some Design Decisions
- Training GLOM as a Denoising Autoencoder & Contrastive Learning
- Coordinate Transformations & Representing Uncertainty
- How GLOM Handles Video
- Conclusion & Comments
Taught by
Yannic Kilcher
Related Courses
- Visual Question Answering: Grounded Systems and Transformer Capsules (University of Central Florida via YouTube)
- CapsuleVOS: Semi-Supervised Video Object Segmentation Using Capsule Routing (University of Central Florida via YouTube)
- Subspace Capsule Network (University of Central Florida via YouTube)
- Capsule Networks for Computer Vision – CVPR 2019 Tutorial (University of Central Florida via YouTube)
- Capsule Networks - A Survey by Dr. Yogesh Rawat, UCF (University of Central Florida via YouTube)