Distributed Training for Efficient Machine Learning - Part II - Lecture 18

Offered By: MIT HAN Lab via YouTube

Tags

Distributed Training Courses, Machine Learning Courses, Pipelining Courses, Parallel Computing Courses, GPU Computing Courses

Course Description

Overview

Dive into the second part of distributed training in this 55-minute lecture from MIT's 6.5940 course on Efficient Machine Learning. Taught by Professor Song Han, the lecture explores advanced concepts and techniques for scaling machine learning models across multiple devices. Gain insights into parallel processing strategies, communication protocols, and optimization methods that enable training large-scale models efficiently. Accompanying slides are available at efficientml.ai to deepen your understanding of distributed training architectures and their implementation in real-world scenarios.
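To make the "communication protocols" aspect concrete, here is a minimal, hedged sketch of the core communication step in data-parallel training: averaging gradients across workers (what a real framework would do with an all-reduce collective over GPUs). This is a pure-Python simulation, not the lecture's code; the worker gradient values are made up for illustration.

```python
# Simulated data-parallel gradient averaging (the effect of an all-reduce).
# In real distributed training this step runs over NCCL/MPI across GPUs;
# here it is simulated in-process for clarity.

def all_reduce_mean(worker_grads):
    """Average per-parameter gradients across all workers.

    worker_grads: list of gradient vectors, one per worker.
    Returns the averaged gradient every worker would receive.
    """
    num_workers = len(worker_grads)
    num_params = len(worker_grads[0])
    return [
        sum(g[i] for g in worker_grads) / num_workers
        for i in range(num_params)
    ]

# Each worker computes gradients on its own data shard (illustrative values)...
grads = [
    [0.25, -0.5, 1.0],   # worker 0
    [0.75, -0.5, 0.5],   # worker 1
]
# ...then every worker ends up with the same averaged gradient,
# so all model replicas stay synchronized after the update.
avg = all_reduce_mean(grads)
print(avg)  # → [0.5, -0.5, 0.75]
```

In practice this averaging is implemented with bandwidth-efficient collectives such as ring all-reduce, so communication cost per worker stays nearly constant as the number of workers grows.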

Syllabus

EfficientML.ai Lecture 18: Distributed Training (Part II) (MIT 6.5940, Fall 2023, Zoom)


Taught by

MIT HAN Lab

Related Courses

Biomolecular Modeling on GPU (Моделирование биологических молекул на GPU)
Moscow Institute of Physics and Technology via Coursera
Practical Deep Learning For Coders
fast.ai via Independent
GPU Architectures And Programming
Indian Institute of Technology, Kharagpur via Swayam
Perform Real-Time Object Detection with YOLOv3
Coursera Project Network via Coursera
Getting Started with PyTorch
Coursera Project Network via Coursera