YoVDO

Distributed Caching for Generative AI: Optimizing LLM Data Pipeline on the Cloud

Offered By: The ASF via YouTube

Tags

Distributed Caching Courses
Machine Learning Courses
Cloud Computing Courses
Distributed Systems Courses
Alluxio Courses

Course Description

Overview

Explore how distributed caching can optimize large language model (LLM) data pipelines on the cloud in this 32-minute talk by Fu Zhengjia, Alluxio Open Source Evangelist. Learn about the challenges of LLM training, including resource-intensive processing and frequent I/O on large numbers of small files. Discover how Alluxio's distributed cache architecture addresses these issues, improving GPU utilization and overall resource efficiency. Examine the synergy between Alluxio and Spark for high-performance data processing in AI scenarios. Delve into the design and implementation of distributed cache systems, best practices for optimizing cloud-based data pipelines, and real-world deployments at Microsoft, Tencent, and Zhihu. Gain insights into building modern data platforms and leveraging scalable infrastructure for LLM training and inference.
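The talk's core idea, that a cache layer between compute and cloud storage absorbs repeated small-file reads, can be illustrated with a toy sketch. This is purely hypothetical code (the class and names are invented for illustration), not Alluxio's actual API or implementation:

```python
# Illustrative sketch only: a toy read-through cache modeling how a
# distributed cache (e.g. Alluxio) can absorb repeated small-file reads,
# so slow cloud storage is hit only once per file instead of every epoch.

class ReadThroughCache:
    def __init__(self, backing_store):
        self.backing_store = backing_store  # dict path -> bytes; stands in for cloud storage
        self.cache = {}                     # local cache tier
        self.remote_reads = 0               # trips to the slow backing store

    def read(self, path):
        # On a miss, fetch from backing storage and populate the cache;
        # every later read of the same path is served locally.
        if path not in self.cache:
            self.remote_reads += 1
            self.cache[path] = self.backing_store[path]
        return self.cache[path]

# Usage: an LLM data loader re-reading the same small files each epoch.
store = {f"sample_{i}.txt": f"data {i}".encode() for i in range(3)}
cache = ReadThroughCache(store)
for epoch in range(5):
    for path in store:
        cache.read(path)
print(cache.remote_reads)  # 3 remote reads instead of 15
```

The point of the sketch is the ratio: without the cache, 5 epochs over 3 files would mean 15 remote reads; with it, only the first epoch touches remote storage, which is why cache hit rate translates directly into GPU utilization for I/O-bound training jobs.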

Syllabus

Distributed Caching for Generative AI: Optimizing the LLM Data Pipeline on the Cloud


Taught by

The ASF

Related Courses

4.0 Shades of Digitalisation for the Chemical and Process Industries
University of Padova via FutureLearn
A Day in the Life of a Data Engineer
Amazon Web Services via AWS Skill Builder
FinTech for Finance and Business Leaders
ACCA via edX
Accounting Data Analytics
University of Illinois at Urbana-Champaign via Coursera
Accounting Data Analytics
Coursera