Distributed Training for Efficient Machine Learning - Part II - Lecture 18
Offered By: MIT HAN Lab via YouTube
Course Description
Overview
Dive into the second part of distributed training in this 55-minute lecture from MIT's 6.5940 course on Efficient Machine Learning. Under the guidance of Professor Song Han, explore advanced concepts and techniques for scaling machine learning models across multiple devices. Gain insights into parallel processing strategies, communication protocols, and optimization methods that enable efficient training of large-scale models. Access the accompanying slides at efficientml.ai to deepen your understanding of distributed training architectures and their implementation in real-world scenarios.
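To make the parallel processing strategies concrete, here is a minimal sketch of data parallelism, the most common entry point to distributed training, using PyTorch's DistributedDataParallel. The sketch is illustrative and not drawn from the lecture itself; the toy model, random data, and hyperparameters are assumptions made for brevity.

# Minimal data-parallel training sketch (illustrative; not from the lecture).
# Launch with: torchrun --nproc_per_node=2 ddp_sketch.py
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # torchrun sets RANK, LOCAL_RANK, and WORLD_SIZE in the environment.
    dist.init_process_group(backend="gloo")  # use "nccl" on GPU nodes
    rank = dist.get_rank()

    model = nn.Linear(10, 1)   # toy model standing in for a large network
    ddp_model = DDP(model)     # wraps the model; gradients are all-reduced across ranks
    optimizer = torch.optim.SGD(ddp_model.parameters(), lr=0.01)
    loss_fn = nn.MSELoss()

    for step in range(3):
        optimizer.zero_grad()
        # Each rank would normally read its own shard of the dataset; random data here.
        inputs, targets = torch.randn(32, 10), torch.randn(32, 1)
        loss = loss_fn(ddp_model(inputs), targets)
        loss.backward()        # backward pass triggers the gradient all-reduce
        optimizer.step()       # all replicas apply the same averaged update
        if rank == 0:
            print(f"step {step}: loss {loss.item():.4f}")

    dist.destroy_process_group()

if __name__ == "__main__":
    main()

Because the all-reduce averages gradients across processes, every replica stays in sync without any explicit parameter broadcast after initialization; switching the backend to "nccl" and moving the model and tensors to CUDA devices is the usual change for multi-GPU training.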
Syllabus
EfficientML.ai Lecture 18: Distributed Training (Part II) (MIT 6.5940, Fall 2023, Zoom)
Taught by
MIT HAN Lab
Related Courses
Intro to Parallel Programming (Nvidia via Udacity)
Introduction to Linear Models and Matrix Algebra (Harvard University via edX)
Introduction to Parallel Programming Using OpenMP and MPI (Tomsk State University via Coursera)
Supercomputing (Partnership for Advanced Computing in Europe via FutureLearn)
Fundamentals of Parallelism on Intel Architecture (Intel via Coursera)