Neural Architecture Search for Efficient Deep Learning - Lecture 9
Offered By: MIT HAN Lab via YouTube
Course Description
Overview
Explore the third part of a lecture series on Neural Architecture Search in this video from MIT's 6.S965 course on TinyML and Efficient Deep Learning Computing. Delve into advanced techniques for deploying neural networks on resource-constrained hardware such as mobile phones and IoT devices. Learn about efficient inference methods, including model compression, pruning, quantization, and neural architecture search. Discover strategies for efficient training, such as gradient compression and on-device transfer learning. Gain insights into application-specific model optimization for video, point clouds, and NLP. Understand the principles of efficient quantum machine learning. Get hands-on experience implementing deep learning applications on microcontrollers, mobile devices, and quantum machines through an open-ended design project focused on mobile AI. Taught by Professor Song Han, this lecture is part of a series that equips students to overcome the challenges of deploying and training neural networks on resource-limited devices.
Syllabus
Lecture 09 - Neural Architecture Search (Part III) | MIT 6.S965
Taught by
MIT HAN Lab
Related Courses
Machine Learning Modeling Pipelines in Production (DeepLearning.AI via Coursera)
MLOps for Scaling TinyML (Harvard University via edX)
Parameter Prediction for Unseen Deep Architectures - With First Author Boris Knyazev (Yannic Kilcher via YouTube)
SpineNet - Learning Scale-Permuted Backbone for Recognition and Localization (Yannic Kilcher via YouTube)
Synthetic Petri Dish - A Novel Surrogate Model for Rapid Architecture Search (Yannic Kilcher via YouTube)