Distributed Caching for Generative AI: Optimizing LLM Data Pipeline on the Cloud
Offered By: The ASF via YouTube
Course Description
Overview
Explore the optimization of large language model (LLM) data pipelines on the cloud through distributed caching in this 32-minute talk by Fu Zhengjia, Alluxio Open Source Evangelist. Learn about the challenges of LLM training, including resource-intensive workloads and frequent I/O on large numbers of small files. Discover how Alluxio's distributed caching architecture addresses these issues, improving GPU utilization and overall resource efficiency. Examine the synergy between Alluxio and Spark for high-performance data processing in AI scenarios. Delve into the design and implementation of distributed cache systems, best practices for optimizing cloud-based data pipelines, and real-world applications at Microsoft, Tencent, and Zhihu. Gain insights into building modern data platforms and leveraging scalable infrastructure for LLM training and inference.
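As a rough illustration of the Alluxio-and-Spark pattern the talk covers, the sketch below shows a PySpark job that reads and writes training data through alluxio:// paths, so hot files are served from the distributed cache instead of being fetched from cloud object storage on every pass. The hostname, port, and dataset paths are illustrative assumptions and are not taken from the talk.

```python
# Minimal PySpark sketch of routing LLM data preparation through Alluxio.
# Assumes the Alluxio client jar is on Spark's classpath and an Alluxio
# master is reachable at alluxio-master:19998 (the default RPC port);
# hostnames and dataset paths below are hypothetical.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("llm-data-prep-via-alluxio")
    # Register Alluxio's Hadoop-compatible filesystem for the alluxio:// scheme.
    .config("spark.hadoop.fs.alluxio.impl", "alluxio.hadoop.FileSystem")
    .getOrCreate()
)

# Read the raw corpus through Alluxio; repeated reads of hot files hit the
# distributed cache rather than the underlying cloud object store.
corpus = spark.read.text("alluxio://alluxio-master:19998/datasets/corpus/")

# Example preprocessing step: drop empty lines before downstream tokenization.
cleaned = corpus.filter("length(trim(value)) > 0")

# Write prepared shards back through Alluxio so that training jobs can
# consume them with cache-local reads instead of many small remote requests.
cleaned.write.mode("overwrite").parquet(
    "alluxio://alluxio-master:19998/datasets/corpus_cleaned/"
)
```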
Syllabus
Distributed Caching for Generative AI: Optimizing the LLM Data Pipeline on the Cloud
Taught by
The ASF
Related Courses
Software as a Service
University of California, Berkeley via Coursera
Software Defined Networking
Georgia Institute of Technology via Coursera
Pattern-Oriented Software Architectures: Programming Mobile Services for Android Handheld Systems
Vanderbilt University via Coursera
Web-Technologien
openHPI
Données et services numériques, dans le nuage et ailleurs
Certificat informatique et internet via France Université Numérique