SpineNet - Learning Scale-Permuted Backbone for Recognition and Localization
Offered By: Yannic Kilcher via YouTube
Course Description
Overview
Explore a comprehensive video explanation of the SpineNet paper, which challenges the conventional scale-decreasing CNN backbone used for object detection. Learn about scale-permuted networks, neural architecture search, and how SpineNet improves upon ResNet-FPN models. Discover how multiple rounds of re-scaling and long-range skip connections between feature maps of different resolutions enhance recognition and localization performance. Gain insights into the up- and downsampling operations, ablation studies, and potential future directions such as attention routing for CNNs. Understand the improvements SpineNet achieves on object detection benchmarks and how the learned backbone transfers to classification tasks.
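To make the cross-scale connection idea concrete, here is a minimal, illustrative PyTorch sketch, not SpineNet's actual implementation: feature maps from two parent blocks at different scales are projected to a common channel count, resampled to a target resolution, and merged, which is roughly what the resampling connections in a scale-permuted backbone do. All module and parameter names here are hypothetical, and nearest-neighbor interpolation is used for both directions as a simplification (the paper uses strided convolutions for downsampling).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Resample(nn.Module):
    """Illustrative cross-scale resampling: match channels, then rescale
    a feature map to a target spatial resolution before merging."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.proj = nn.Conv2d(in_ch, out_ch, kernel_size=1)  # adjust channel count

    def forward(self, x, target_hw):
        x = self.proj(x)
        if x.shape[-2:] != target_hw:
            # nearest-neighbor interpolation handles both up- and downsampling here;
            # this is a simplification of the paper's resampling operations
            x = F.interpolate(x, size=target_hw, mode="nearest")
        return x

class ScalePermutedMerge(nn.Module):
    """Merge two parent feature maps (possibly at very different scales,
    i.e. long-range skip connections) into one block at a target scale."""
    def __init__(self, ch_a, ch_b, out_ch):
        super().__init__()
        self.resample_a = Resample(ch_a, out_ch)
        self.resample_b = Resample(ch_b, out_ch)
        self.block = nn.Sequential(
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, feat_a, feat_b, target_hw):
        merged = self.resample_a(feat_a, target_hw) + self.resample_b(feat_b, target_hw)
        return self.block(merged)

# usage: combine a fine-grained and a coarse feature map into a mid-scale block
if __name__ == "__main__":
    high_res = torch.randn(1, 64, 64, 64)   # e.g. an early, high-resolution feature map
    low_res = torch.randn(1, 256, 8, 8)     # e.g. a late, low-resolution feature map
    merge = ScalePermutedMerge(ch_a=64, ch_b=256, out_ch=128)
    out = merge(high_res, low_res, target_hw=(16, 16))
    print(out.shape)  # torch.Size([1, 128, 16, 16])
```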
Syllabus
- Intro & Overview
- Problem Statement
- The Problem with Current Architectures
- Scale-Permuted Networks
- Neural Architecture Search
- Up- and Downsampling
- From ResNet to SpineNet
- Ablations
- My Idea: Attention Routing for CNNs
- More Experiments
- Conclusion & Comments
Taught by
Yannic Kilcher
Related Courses
- Machine Learning Modeling Pipelines in Production (DeepLearning.AI via Coursera)
- MLOps for Scaling TinyML (Harvard University via edX)
- Parameter Prediction for Unseen Deep Architectures - With First Author Boris Knyazev (Yannic Kilcher via YouTube)
- Synthetic Petri Dish - A Novel Surrogate Model for Rapid Architecture Search (Yannic Kilcher via YouTube)
- EfficientNetV2 - Smaller Models and Faster Training - Paper Explained (Aleksa Gordić - The AI Epiphany via YouTube)