Parameter Prediction for Unseen Deep Architectures - With First Author Boris Knyazev
Offered By: Yannic Kilcher via YouTube
Course Description
Overview
Explore a groundbreaking approach to deep learning in this 48-minute video featuring first author Boris Knyazev. Dive into parameter prediction for unseen deep architectures, where a Graph HyperNetwork (GHN) is trained to predict high-performing weights for novel network architectures without training them from scratch. Learn about the DeepNets-1M dataset, the training process for the hypernetwork, and the use of graph neural networks. Discover how message passing mirrors forward and backward propagation, techniques for handling different output shapes, and the roles of differentiable normalization and virtual residual edges. Examine experimental results, fine-tuning experiments, and the paper's public reception. Gain insights into a potentially more computationally efficient paradigm for training neural networks and its implications for the future of deep learning.
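The core idea lends itself to a compact illustration. Below is a minimal, hypothetical sketch (not the authors' code) of a GHN-style predictor: the candidate architecture is encoded as a graph of operations, a few rounds of message passing refine per-node embeddings, and a decoder maps each embedding to a flat parameter vector for that node's operation. All names here (`TinyGHN`, `op_vocab_size`, `max_params`) are illustrative assumptions, not identifiers from the paper.

```python
# A minimal sketch of the GHN idea, assuming an op-type vocabulary and a
# fixed maximum parameter count per node; the real method is more involved.
import torch
import torch.nn as nn

class TinyGHN(nn.Module):
    def __init__(self, op_vocab_size=16, hidden=32, max_params=3 * 3 * 16 * 16):
        super().__init__()
        self.embed = nn.Embedding(op_vocab_size, hidden)  # one embedding per op type
        self.msg = nn.Linear(hidden, hidden)              # message function
        self.upd = nn.GRUCell(hidden, hidden)             # node-state update
        self.decode = nn.Linear(hidden, max_params)       # embedding -> flat weights

    def forward(self, op_ids, edges, rounds=2):
        # op_ids: (n_nodes,) op-type ids; edges: (src, dst) pairs over the DAG.
        h = self.embed(op_ids)
        for _ in range(rounds):
            msgs = self.msg(h)                  # transform all node states once
            agg = torch.zeros_like(h)
            for s, d in edges:                  # sum incoming messages per node
                agg[d] = agg[d] + msgs[s]
            h = self.upd(agg, h)                # GRU-style state update
        return self.decode(h)                   # one flat parameter vector per node

# Example: a 3-node chain conv -> relu -> conv
# ghn = TinyGHN()
# flat = ghn(torch.tensor([2, 5, 2]), edges=[(0, 1), (1, 2)])
# flat has shape (3, max_params).
```

In practice each predicted flat vector must still be sliced or reshaped to the operation's actual weight shape, which is exactly the "different output shapes" problem the video covers.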
Syllabus
- Intro & Overview
- DeepNets-1M Dataset
- How to train the Hypernetwork
- Recap on Graph Neural Networks
- Message Passing mirrors forward and backward propagation
- How to deal with different output shapes
- Differentiable Normalization
- Virtual Residual Edges
- Meta-Batching (see the training-loop sketch after this syllabus)
- Experimental Results
- Fine-Tuning experiments
- Public reception of the paper
- Note: at one point in the video, Boris mentions that they train the first variant, but on closer examination it is more like the second
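As referenced in the Meta-Batching item above, here is a hedged sketch of what a hypernetwork training step with meta-batching might look like: each step samples several architectures, predicts their parameters, and averages the resulting task losses before a single optimizer update. `sample_architecture` and `run_with_predicted_params` are hypothetical placeholders, not functions from the paper's codebase.

```python
# A sketch of one meta-batched training step, assuming placeholder helpers
# for architecture sampling and for running a network with predicted weights.
import torch
import torch.nn.functional as F

def train_step(ghn, optimizer, images, labels, meta_batch_size=8):
    optimizer.zero_grad()
    total = 0.0
    for _ in range(meta_batch_size):
        op_ids, edges = sample_architecture()     # random graph from the dataset
        flat_params = ghn(op_ids, edges)          # predict all weights at once
        logits = run_with_predicted_params(op_ids, edges, flat_params, images)
        total = total + F.cross_entropy(logits, labels)
    (total / meta_batch_size).backward()          # gradients reach the GHN through
    optimizer.step()                              # the predicted weights
```

Averaging over several sampled architectures per step smooths the hypernetwork's gradient across the architecture distribution rather than overfitting each update to a single graph.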
Taught by
Yannic Kilcher
Related Courses
- Machine Learning Modeling Pipelines in Production (DeepLearning.AI via Coursera)
- MLOps for Scaling TinyML (Harvard University via edX)
- SpineNet - Learning Scale-Permuted Backbone for Recognition and Localization (Yannic Kilcher via YouTube)
- Synthetic Petri Dish - A Novel Surrogate Model for Rapid Architecture Search (Yannic Kilcher via YouTube)
- EfficientNetV2 - Smaller Models and Faster Training - Paper Explained (Aleksa Gordić - The AI Epiphany via YouTube)