Distributed Training and Gradient Compression - Part I - Lecture 13
Offered By: MIT HAN Lab via YouTube
Course Description
Overview
Explore the fundamentals of distributed training for neural networks in this lecture from MIT's course on TinyML and Efficient Deep Learning Computing. Delve into key concepts such as data parallelism and model parallelism, essential for scaling machine learning models across multiple devices. Learn how to overcome challenges in deploying neural networks on mobile and IoT devices, and discover techniques for accelerating training processes. Gain insights from instructor Song Han on efficient machine learning methods that enable powerful deep learning applications on resource-constrained devices. Access accompanying slides and course materials to enhance your understanding of distributed training strategies and their practical applications in mobile AI and IoT scenarios.
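The core idea previewed above, data parallelism, keeps a full copy of the model on every device and averages gradients across the copies after each backward pass. As a rough illustration only (not material from the lecture), the following minimal PyTorch DistributedDataParallel sketch shows this pattern; the toy linear model, tensor shapes, and the "gloo" backend are placeholder assumptions.

# Minimal single-machine data-parallelism sketch using PyTorch DDP.
# Illustrative only: the toy model, shapes, and "gloo" backend are
# assumptions, not content taken from the lecture itself.
import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
from torch.nn.parallel import DistributedDataParallel as DDP

def worker(rank, world_size):
    # Each process holds one model replica; gradients are all-reduced
    # across replicas automatically during backward().
    os.environ["MASTER_ADDR"] = "127.0.0.1"
    os.environ["MASTER_PORT"] = "29500"
    dist.init_process_group("gloo", rank=rank, world_size=world_size)

    model = DDP(torch.nn.Linear(16, 4))
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

    # Each rank would normally see a different shard of the data;
    # synthetic tensors stand in for a real dataset here.
    x = torch.randn(8, 16)
    y = torch.randn(8, 4)

    loss = torch.nn.functional.mse_loss(model(x), y)
    loss.backward()          # gradient all-reduce happens here
    optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    world_size = 2
    mp.spawn(worker, args=(world_size,), nprocs=world_size)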
Syllabus
Lecture 13 - Distributed Training and Gradient Compression (Part I) | MIT 6.S965
Taught by
MIT HAN Lab
Related Courses
Comprendre les Microcontrôleurs - École Polytechnique Fédérale de Lausanne via Coursera
Electronic Interfaces: Bridging the Physical and Digital Worlds - University of California, Berkeley via edX
Arduino y algunas aplicaciones - Universidad Nacional Autónoma de México via Coursera
Embedded Systems Design - Indian Institute of Technology, Kharagpur via Swayam
Enseignes et afficheurs à LED - École Polytechnique Fédérale de Lausanne via Coursera