Neural Architecture Search Without Training - Paper Explained

Offered By: Yannic Kilcher via YouTube

Tags

Neural Architecture Search, Machine Learning

Course Description

Overview

Explore a groundbreaking approach to Neural Architecture Search (NAS) that eliminates the need for time-consuming and resource-intensive training of numerous candidate models. Learn how statistics of the network's Jacobian around data points, computed at initialization, can be used to estimate the performance of proposed architectures, significantly speeding up the NAS process. Dive into the concepts of linearization around datapoints and linearization statistics, and understand their application on the NAS-Bench-201 benchmark. Examine the experimental results demonstrating that this method finds strong network architectures without any training, in seconds on a single GPU.
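The core idea admits a short sketch. The following is a minimal PyTorch illustration (not the authors' reference implementation) of scoring an untrained network by the correlation of its input Jacobians across a mini-batch; the network, batch shape, and the scalar summary at the end are assumptions for illustration, since this listing does not spell out the paper's exact scoring formula.

import torch
import torch.nn as nn

def jacobian_score(model: nn.Module, x: torch.Tensor) -> float:
    """Score an *untrained* model from Jacobian statistics at initialization."""
    x = x.clone().requires_grad_(True)
    y = model(x)                      # (batch, num_outputs)
    y.backward(torch.ones_like(y))    # gradient of summed outputs w.r.t. each input
    J = x.grad.flatten(start_dim=1)   # one flattened Jacobian row per datapoint

    # Correlation matrix of Jacobian rows across datapoints: highly similar
    # rows (large off-diagonal correlations) indicate the network linearizes
    # the datapoints in nearly the same way, which the paper links to poor
    # final accuracy.
    Jn = (J - J.mean(dim=1, keepdim=True)) / (J.std(dim=1, keepdim=True) + 1e-8)
    C = (Jn @ Jn.t()) / Jn.shape[1]

    # Illustrative scalar summary (an assumption, not the paper's exact score):
    # penalize large off-diagonal correlations.
    off_diag = C - torch.eye(C.shape[0], device=C.device)
    return -off_diag.abs().sum().item()

# Hypothetical usage on a freshly initialized network and one mini-batch:
net = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 256), nn.ReLU(), nn.Linear(256, 10))
batch = torch.randn(32, 3, 32, 32)   # e.g. CIFAR-10-shaped inputs
print(jacobian_score(net, batch))    # higher score -> more promising architecture

Because the score needs only a single forward/backward pass on an untrained network, candidate architectures can be ranked in seconds, which is what lets the search dispense with training entirely.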

Syllabus

- Intro & Overview
- Neural Architecture Search
- Controller-based NAS
- Architecture Search Without Training
- Linearization Around Datapoints
- Linearization Statistics
- NAS-Bench-201 Benchmark
- Experiments
- Conclusion & Comments


Taught by

Yannic Kilcher

Related Courses

Machine Learning Modeling Pipelines in Production
DeepLearning.AI via Coursera
MLOps for Scaling TinyML
Harvard University via edX
Parameter Prediction for Unseen Deep Architectures - With First Author Boris Knyazev
Yannic Kilcher via YouTube
SpineNet - Learning Scale-Permuted Backbone for Recognition and Localization
Yannic Kilcher via YouTube
Synthetic Petri Dish - A Novel Surrogate Model for Rapid Architecture Search
Yannic Kilcher via YouTube