Distributed Training for Efficient Machine Learning - Part I - Lecture 17

Offered By: MIT HAN Lab via YouTube

Tags

Distributed Training Courses, Machine Learning Courses, Parallel Computing Courses

Course Description

Overview

Explore distributed training techniques for machine learning in this lecture from MIT's 6.5940 course, part of the EfficientML.ai series. In this first of two parts on distributed training, Prof. Song Han covers the fundamental concepts, challenges, and strategies for scaling machine learning models across multiple devices or nodes. The lecture introduces parallel processing, data parallelism, and model parallelism techniques used to accelerate the training of large-scale neural networks, and shows how distributed training can significantly reduce training time and enable the development of larger, more complex models. Accompanying slides are available at efficientml.ai.
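To make the data-parallelism idea mentioned above concrete, here is a minimal sketch using PyTorch's DistributedDataParallel wrapper. It is not code from the lecture: the linear model, random data, and hyperparameters are illustrative placeholders, assumed only for the example.

```python
# Minimal data-parallelism sketch with PyTorch DistributedDataParallel (DDP).
# Each process holds a full model replica; gradients are averaged across
# processes during backward(). Launch with:
#   torchrun --nproc_per_node=2 ddp_sketch.py
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP


def main():
    # torchrun sets RANK, LOCAL_RANK, and WORLD_SIZE for each process.
    dist.init_process_group(backend="gloo")  # use "nccl" on GPU nodes
    rank = dist.get_rank()

    # Toy model (placeholder); every replica starts from the same weights.
    model = nn.Linear(10, 1)
    ddp_model = DDP(model)
    optimizer = torch.optim.SGD(ddp_model.parameters(), lr=0.01)
    loss_fn = nn.MSELoss()

    for step in range(5):
        # In practice each rank would load a different shard of the dataset
        # (e.g. via DistributedSampler); random tensors stand in here.
        inputs = torch.randn(32, 10)
        targets = torch.randn(32, 1)
        optimizer.zero_grad()
        loss = loss_fn(ddp_model(inputs), targets)
        loss.backward()  # DDP all-reduces (averages) gradients here
        optimizer.step()
        if rank == 0:
            print(f"step {step}: loss {loss.item():.4f}")

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```

Because gradient averaging happens inside backward(), all replicas stay synchronized after every step, which is what lets data parallelism scale the effective batch size across devices.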

Syllabus

EfficientML.ai Lecture 17: Distributed Training (Part I) (MIT 6.5940, Fall 2023)


Taught by

MIT HAN Lab

Related Courses

Introduction to Artificial Intelligence
Stanford University via Udacity
Natural Language Processing
Columbia University via Coursera
Probabilistic Graphical Models 1: Representation
Stanford University via Coursera
Computer Vision: The Fundamentals
University of California, Berkeley via Coursera
Learning from Data (Introductory Machine Learning course)
California Institute of Technology via Independent