Efficient Transformers - Lecture 20
Offered By: MIT HAN Lab via YouTube
Course Description
Overview
Explore efficient transformers in this lecture from MIT's TinyML and Efficient Deep Learning Computing course. Dive into techniques for optimizing transformer models to run on resource-constrained devices such as mobile phones and IoT hardware. Learn how model compression, pruning, quantization, neural architecture search, and knowledge distillation reduce the computational and memory requirements of transformer architectures, and how these methods enable powerful natural language processing on edge devices. Gain practical insights for deploying transformer-based AI applications in mobile and embedded systems. Accompanying slides and resources reinforce the key concepts covered in the 1-hour 18-minute video lecture, led by Professor Song Han of the MIT HAN Lab.
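The listing above only names the techniques covered; as a flavor of one of them, here is a minimal sketch of post-training dynamic quantization in PyTorch, applied to a toy transformer encoder. The model, layer sizes, and input shape are illustrative assumptions, not code from the lecture.

```python
import torch
import torch.nn as nn

# Toy transformer encoder standing in for a real model; sizes are illustrative.
model = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=256, nhead=4, batch_first=True),
    num_layers=2,
)
model.eval()

# Swap every nn.Linear for a dynamically quantized version: weights are
# stored as int8, activations are quantized on the fly at inference time.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 32, 256)  # (batch, sequence length, embedding dim)
with torch.no_grad():
    y = quantized(x)
print(y.shape)  # torch.Size([1, 32, 256])
```

On a real checkpoint, the usual sanity check is to compare model size and latency before and after quantization, since dynamic quantization mainly shrinks the Linear-layer weights that dominate transformer parameter counts.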
Syllabus
Lecture 20 - Efficient Transformers | MIT 6.S965
Taught by
MIT HAN Lab
Related Courses
Digital Signal Processing (École Polytechnique Fédérale de Lausanne via Coursera)
Principles of Communication Systems - I (Indian Institute of Technology Kanpur via Swayam)
Digital Signal Processing 2: Filtering (École Polytechnique Fédérale de Lausanne via Coursera)
Digital Signal Processing 3: Analog vs Digital (École Polytechnique Fédérale de Lausanne via Coursera)
Digital Signal Processing 4: Applications (École Polytechnique Fédérale de Lausanne via Coursera)