
Distributed Training and Gradient Compression - Part I - Lecture 13

Offered By: MIT HAN Lab via YouTube

Tags

Distributed Training Courses, Neural Networks Courses, Microcontrollers Courses, TinyML Courses

Course Description

Overview

Explore the fundamentals of distributed training for neural networks in this lecture from MIT's course on TinyML and Efficient Deep Learning Computing. Delve into key concepts such as data parallelism and model parallelism, essential for scaling machine learning models across multiple devices. Learn how to overcome challenges in deploying neural networks on mobile and IoT devices, and discover techniques for accelerating training processes. Gain insights from instructor Song Han on efficient machine learning methods that enable powerful deep learning applications on resource-constrained devices. Access accompanying slides and course materials to enhance your understanding of distributed training strategies and their practical applications in mobile AI and IoT scenarios.
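The lecture title pairs distributed training with gradient compression; as a rough illustration of the compression idea, the PyTorch sketch below performs top-k gradient sparsification, keeping only the largest-magnitude gradient entries before communication. The function names, the 1% ratio, and the random stand-in gradient are assumptions for illustration, not material taken from the lecture or its slides.

```python
# A minimal sketch of top-k gradient sparsification, one common form of
# gradient compression. Names, the 1% ratio, and the random "gradient"
# below are illustrative assumptions, not material from the lecture.
import torch


def compress_topk(grad: torch.Tensor, ratio: float = 0.01):
    """Keep only the largest-magnitude fraction of gradient entries."""
    flat = grad.flatten()
    k = max(1, int(flat.numel() * ratio))
    _, idx = torch.topk(flat.abs(), k)      # indices of the k largest-magnitude entries
    return flat[idx], idx, grad.shape       # transmit values + indices instead of the dense tensor


def decompress_topk(values, idx, shape):
    """Rebuild a dense gradient that is zero everywhere except the kept entries."""
    dense = torch.zeros(shape, dtype=values.dtype)
    dense.view(-1)[idx] = values
    return dense


# Example: compress a stand-in gradient, then reconstruct it before an optimizer step.
grad = torch.randn(256, 128)
values, idx, shape = compress_topk(grad, ratio=0.01)
restored = decompress_topk(values, idx, shape)
print(f"kept {values.numel()} of {grad.numel()} entries")
```

In a data-parallel setting, each worker would exchange only the kept values and indices with its peers, typically combined with a correction mechanism such as locally accumulating the dropped gradient entries so they are not lost across iterations.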

Syllabus

Lecture 13 - Distributed Training and Gradient Compression (Part I) | MIT 6.S965


Taught by

MIT HAN Lab

Related Courses

Custom and Distributed Training with TensorFlow (DeepLearning.AI via Coursera)
Architecting Production-ready ML Models Using Google Cloud ML Engine (Pluralsight)
Building End-to-end Machine Learning Workflows with Kubeflow (Pluralsight)
Deploying PyTorch Models in Production: PyTorch Playbook (Pluralsight)
Inside TensorFlow (TensorFlow via YouTube)