
Distributed Training - Part I - Lecture 17

Offered By: MIT HAN Lab via YouTube

Tags

Distributed Training Courses
Machine Learning Courses
Parallel Computing Courses
GPU Computing Courses
Scalability Courses

Course Description

Overview

Explore distributed training techniques in machine learning with this lecture from MIT's 6.5940 course, the first of two parts on distributed training, taught by Professor Song Han as part of the EfficientML.ai series. Learn the fundamental concepts, challenges, and strategies for scaling machine learning training across multiple devices or nodes. Gain insights into data parallelism and model parallelism, techniques that enable efficient training of large-scale models. Understand why distributed training matters for modern AI applications and how it accelerates the development of complex neural networks. Accompanying slides are available at efficientml.ai to follow along with the lecture content.
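
As a quick, self-contained illustration of the data parallelism the lecture covers (this sketch is not taken from the lecture itself), here is a minimal PyTorch DistributedDataParallel training loop. It assumes a single multi-GPU host launched with `torchrun --nproc_per_node=<N> train.py`; the linear model and synthetic batches are placeholders.

# Minimal sketch of data-parallel training with PyTorch DDP.
# Assumes launch via torchrun, which sets RANK, LOCAL_RANK, and WORLD_SIZE.
import os
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Each rank holds a full replica of the model (data parallelism).
    model = nn.Linear(1024, 10).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()

    # Each rank trains on its own shard of the data; DDP all-reduces
    # gradients during backward() so the replicas stay synchronized.
    for _ in range(10):
        x = torch.randn(32, 1024, device=local_rank)  # placeholder batch
        y = torch.randint(0, 10, (32,), device=local_rank)
        optimizer.zero_grad()
        loss_fn(model(x), y).backward()
        optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()

Model parallelism, by contrast, splits the model itself (by layers or tensors) across devices when it is too large for a single GPU's memory; the overview above indicates the lecture introduces both approaches.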

Syllabus

EfficientML.ai Lecture 17: Distributed Training (Part I) (MIT 6.5940, Fall 2023, Zoom)


Taught by

MIT HAN Lab

Related Courses

Biomolecular Modeling on GPU (Моделирование биологических молекул на GPU)
Moscow Institute of Physics and Technology via Coursera
Practical Deep Learning For Coders
fast.ai via Independent
GPU Architectures And Programming
Indian Institute of Technology, Kharagpur via Swayam
Perform Real-Time Object Detection with YOLOv3
Coursera Project Network via Coursera
Getting Started with PyTorch
Coursera Project Network via Coursera