Building RAG-based LLM Applications for Production - LLMs III Talk
Offered By: MLOps.community via YouTube
Course Description
Overview
Explore the development and deployment of RAG-based LLM applications for production in this 30-minute talk by Philipp Moritz and Yifei Feng. Learn how to scale the major workloads of a RAG pipeline, such as data loading, preprocessing, embedding, and serving, across a cluster. Discover techniques for evaluating different configurations and deploying applications effectively. Gain insights into Anyscale Endpoints, a cost-effective solution for serving popular open-source models. The speakers, Philipp Moritz, co-creator of Ray and CTO of Anyscale, and Yifei Feng, who leads the Infrastructure and SRE teams at Anyscale, draw on their experience building scalable AI applications.
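As one illustration of the scaling theme above, here is a minimal sketch (not code from the talk) of distributing the embedding step of a RAG pipeline with Ray Data; the model name, file paths, and batch settings are assumptions chosen for the example.

```python
# Minimal sketch of scaling the embedding step with Ray Data.
# Not from the talk; model name, paths, and sizes are illustrative assumptions.
import ray
from sentence_transformers import SentenceTransformer


class Embedder:
    def __init__(self):
        # Hypothetical embedding model; any sentence-transformers model could be used.
        self.model = SentenceTransformer("all-MiniLM-L6-v2")

    def __call__(self, batch):
        # Embed a batch of text chunks; Ray passes batches as dicts of arrays.
        batch["embedding"] = self.model.encode(list(batch["text"]))
        return batch


ray.init()

# Read preprocessed text chunks and embed them in parallel across the cluster.
chunks = ray.data.read_text("data/chunks/")  # hypothetical input location
embedded = chunks.map_batches(Embedder, concurrency=4, batch_size=64)
embedded.write_parquet("data/embeddings/")   # hypothetical output location
```

The same pattern (a stateful callable mapped over batches) applies to the other pipeline stages the talk covers, such as preprocessing and serving.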
Syllabus
Building RAG-based LLM Applications for Production // Philipp Moritz & Yifei Feng // LLMs III Talk
Taught by
MLOps.community
Related Courses
Optimizing LLM Inference with AWS Trainium, Ray, vLLM, and Anyscale - Anyscale via YouTube
Scalable and Cost-Efficient AI Workloads with AWS and Anyscale - Anyscale via YouTube
End-to-End LLM Workflows with Anyscale - Anyscale via YouTube
Developing and Serving RAG-Based LLM Applications in Production - Anyscale via YouTube
Deploying Many Models Efficiently with Ray Serve - Anyscale via YouTube