Accelerating Machine Learning Serving with Distributed Caches
Offered By: Data Science Festival via YouTube
Course Description
Overview
Explore innovative strategies for optimizing machine learning serving performance in this 30-minute technical talk by Iaroslav Geraskin from TikTok. Dive into the concept of leveraging distributed caches to accelerate ML serving, focusing on caching frequently accessed model predictions and intermediate computations. Gain practical insights into reducing latency and improving throughput in ML inference pipelines. Examine cache design considerations, implementation best practices, and the challenges associated with incorporating distributed caches into ML serving architectures. Equip yourself with the knowledge and tools necessary to harness the full potential of distributed caching for accelerated ML serving. Suitable for technical practitioners, this talk was presented as part of the Data Science Festival MayDay event 2024.
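The core idea the talk covers — caching frequently accessed model predictions keyed by their inputs — can be sketched briefly. This is a minimal illustration, not code from the talk: a real deployment would use a distributed store such as Redis shared across serving replicas; a plain in-process dict stands in for it here, and the toy model is a placeholder.

```python
import hashlib
import json

# Stand-in for a distributed cache (e.g. Redis) shared by serving replicas.
prediction_cache = {}

def cache_key(features):
    # Deterministic key: hash the canonical JSON form of the input features,
    # so logically identical requests map to the same cache entry.
    payload = json.dumps(features, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def predict(features, model):
    key = cache_key(features)
    if key in prediction_cache:
        return prediction_cache[key]      # cache hit: skip inference entirely
    result = model(features)              # cache miss: run the model
    prediction_cache[key] = result        # store for subsequent requests
    return result

# Toy "model" for illustration: sums the feature values.
toy_model = lambda f: sum(f.values())

predict({"a": 1, "b": 2}, toy_model)   # miss: runs the model
predict({"b": 2, "a": 1}, toy_model)   # hit: same canonical key, no inference
```

In practice the design questions the talk examines — key construction, eviction, staleness after model updates — all hang off this basic lookup-before-inference pattern.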
Syllabus
Accelerating Machine Learning Serving with Distributed Caches
Taught by
Data Science Festival
Related Courses
Project Setup and Practice - ASP.NET Core, C#, Redis, Distributed Caching (Raw Coding via YouTube)
Caching Strategies and Theory for ASP.NET Core - Distributed Caching with Redis (Raw Coding via YouTube)
Where is My Cache? Architectural Patterns for Caching Microservices by Example (Devoxx via YouTube)
Fast Reliable Swift Builds with Buck (Devoxx via YouTube)
Elegant Builds at Scale with Gradle 3.0 (Devoxx via YouTube)