Efficient Deep Learning Computing: From TinyML to Large Language Models

Offered By: MIT HAN Lab via YouTube

Tags

Computer Vision, Quantization, TinyML

Course Description

Overview

Explore Ji Lin's PhD research on efficient deep learning computing in this 56-minute thesis defense presentation from MIT. Spanning TinyML and large language models, Lin discusses his work on the MCUNet series for TinyML inference and on-device training, AMC, TSM, and quantization techniques such as SmoothQuant and AWQ. Learn how these innovations have been adopted by industry leaders such as NVIDIA, Intel, and Hugging Face. Discover the impact of Lin's research, which has garnered over 8,500 citations and 8,000 GitHub stars and has been featured in prominent tech publications. Gain insights into the future of efficient ML computing from an NVIDIA Graduate Fellowship Finalist and Qualcomm Innovation Fellowship recipient.
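
To make the quantization theme concrete, here is a minimal NumPy sketch of the core idea behind SmoothQuant: migrate activation outliers into the weights with per-channel smoothing factors so that both tensors quantize to INT8 with less error. This is not material from the lecture and not the official implementation; the function names and the alpha=0.5 setting are illustrative assumptions.

```python
import numpy as np

def smooth_scales(X, W, alpha=0.5):
    """SmoothQuant-style per-input-channel smoothing factors:
    s_j = max|X[:, j]|^alpha / max|W[j, :]|^(1 - alpha)."""
    act_max = np.abs(X).max(axis=0)      # per-channel activation range
    w_max = np.abs(W).max(axis=1)        # per-channel weight range
    s = act_max ** alpha / w_max ** (1 - alpha)
    return np.clip(s, 1e-5, None)        # guard against zero division

def fake_quant_int8(M):
    """Symmetric per-tensor INT8 round trip (quantize, then dequantize)."""
    scale = np.abs(M).max() / 127.0
    return np.round(M / scale).clip(-127, 127) * scale

rng = np.random.default_rng(0)
X = rng.normal(size=(16, 64))
X[:, 3] *= 30.0                          # inject an activation outlier channel
W = rng.normal(size=(64, 32))

s = smooth_scales(X, W)
X_s, W_s = X / s, W * s[:, None]         # X @ W == X_s @ W_s exactly

err_plain  = np.abs(fake_quant_int8(X)   @ fake_quant_int8(W)   - X @ W).mean()
err_smooth = np.abs(fake_quant_int8(X_s) @ fake_quant_int8(W_s) - X @ W).mean()
print(f"INT8 error without smoothing: {err_plain:.4f}")
print(f"INT8 error with smoothing:    {err_smooth:.4f}")
```

Because X @ W equals (X / s) @ (s[:, None] * W), the smoothing itself is mathematically lossless; only the rounding error of the subsequent INT8 quantization changes, which is the effect the sketch prints.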

Syllabus

Ji Lin's PhD Defense, Efficient Deep Learning Computing: From TinyML to Large Language Model. @MIT


Taught by

MIT HAN Lab

Related Courses

Digital Signal Processing
École Polytechnique Fédérale de Lausanne via Coursera
Principles of Communication Systems - I
Indian Institute of Technology Kanpur via Swayam
Digital Signal Processing 2: Filtering
École Polytechnique Fédérale de Lausanne via Coursera
Digital Signal Processing 3: Analog vs Digital
École Polytechnique Fédérale de Lausanne via Coursera
Digital Signal Processing 4: Applications
École Polytechnique Fédérale de Lausanne via Coursera