YoVDO

Efficient Inference of Extremely Large Transformer Models

Offered By: Toronto Machine Learning Series (TMLS) via YouTube

Tags

Transformer Models, Machine Learning, Deep Learning, Model Optimization, Model Compression

Course Description

Overview

Explore the challenges and solutions for efficient inference of massive transformer-based language models in this 28-minute Toronto Machine Learning Series (TMLS) talk. Dive into the world of multi-billion-parameter models and learn how they are optimized for production environments. Discover key techniques for making these behemoth models faster, smaller, and more cost-effective, including model compression, efficient attention mechanisms, and optimal model parallelism strategies. Gain insights from Bharat Venkitesh, Senior Machine Learning Engineer at Cohere, as he discusses how the inference tech stack was established and the latest advancements in handling extremely large transformer models.
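
As context for one of the techniques the talk names, model compression, the snippet below is a minimal, hypothetical sketch of post-training dynamic int8 quantization in PyTorch. It is not material from the talk; the toy feed-forward block and its layer sizes are placeholder assumptions standing in for the much larger blocks in a real language model.

```python
# Illustrative sketch of model compression via post-training dynamic
# quantization; not code from the talk, and the layer sizes are placeholders.
import torch
import torch.nn as nn

# A stand-in for one Transformer feed-forward block; production language
# models stack many such blocks, so the memory savings compound.
model = nn.Sequential(
    nn.Linear(1024, 4096),
    nn.GELU(),
    nn.Linear(4096, 1024),
).eval()

# Convert the Linear weights to int8; activations are quantized on the fly
# at inference time, shrinking the model and speeding up CPU execution.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(8, 1024)  # dummy batch of hidden states
with torch.no_grad():
    print(model(x).shape, quantized(x).shape)  # output shapes are unchanged
```

The same idea (trading a small amount of numerical precision for lower memory use and faster inference) underlies the more aggressive compression and parallelism strategies discussed in the talk.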

Syllabus

Efficient Inference of Extremely Large Transformer Models


Taught by

Toronto Machine Learning Series (TMLS)

Related Courses

TensorFlow Lite for Edge Devices - Tutorial
freeCodeCamp
Few-Shot Learning in Production
HuggingFace via YouTube
TinyML Talks Germany - Neural Network Framework Using Emerging Technologies for Screening Diabetic
tinyML via YouTube
TinyML for All: Full-stack Optimization for Diverse Edge AI Platforms
tinyML via YouTube
TinyML Talks - Software-Hardware Co-design for Tiny AI Systems
tinyML via YouTube