Accelerating Machine Learning Serving with Distributed Caches
Offered By: Data Science Festival via YouTube
Course Description
Overview
Explore strategies for optimizing machine learning serving performance in this 30-minute technical talk by Iaroslav Geraskin from TikTok. Learn how distributed caches can accelerate ML serving by caching frequently accessed model predictions and intermediate computations, reducing latency and improving throughput in ML inference pipelines. Examine cache design considerations, implementation best practices, and the challenges of incorporating distributed caches into ML serving architectures, and gain the knowledge and tools needed to apply distributed caching to accelerated ML serving. Suitable for technical practitioners, this talk was presented as part of the Data Science Festival MayDay 2024 event.
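To make the core idea concrete, here is a minimal look-aside cache sketch for model predictions. This is not from the talk itself: the class name, the use of a plain dict in place of a real distributed cache such as Redis, and the feature-hashing key scheme are all illustrative assumptions.

```python
import hashlib
import json


class CachedPredictor:
    """Wrap a model with a look-aside cache for repeated inputs.

    `cache` can be any mapping-like store. In production this would be a
    distributed cache (e.g. Redis); here a plain dict stands in. All
    names are illustrative, not taken from the talk.
    """

    def __init__(self, model_fn, cache):
        self.model_fn = model_fn
        self.cache = cache
        self.hits = 0
        self.misses = 0

    def _key(self, features):
        # Deterministic key derived from the feature payload.
        payload = json.dumps(features, sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

    def predict(self, features):
        key = self._key(features)
        cached = self.cache.get(key)
        if cached is not None:
            self.hits += 1
            return cached
        self.misses += 1
        prediction = self.model_fn(features)
        self.cache[key] = prediction  # with Redis: SET with a TTL
        return prediction


# Demo: the same input served twice only runs the model once.
model = lambda f: sum(f["x"]) / len(f["x"])
predictor = CachedPredictor(model, cache={})
first = predictor.predict({"x": [1, 2, 3]})   # miss: runs the model
second = predictor.predict({"x": [1, 2, 3]})  # hit: served from cache
```

In a real deployment the dict would be replaced by a networked cache client shared across serving replicas, and entries would carry a TTL so that stale predictions expire after model updates.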
Syllabus
Accelerating Machine Learning Serving with Distributed Caches
Taught by
Data Science Festival
Related Courses
Introduction to Artificial Intelligence - Stanford University via Udacity
Natural Language Processing - Columbia University via Coursera
Probabilistic Graphical Models 1: Representation - Stanford University via Coursera
Computer Vision: The Fundamentals - University of California, Berkeley via Coursera
Learning from Data (Introductory Machine Learning course) - California Institute of Technology via Independent