Enabling End-to-End LLMOps on Michelangelo with Ray
Offered By: Anyscale via YouTube
Course Description
Overview
Explore how Uber is leveraging Ray to extend its Michelangelo ML platform for end-to-end LLMOps in this 30-minute conference talk. Discover how Uber has established a scalable, interactive development environment capable of utilizing hundreds of A100 GPUs on Ray with great flexibility. Learn about the integration of various open-source techniques for LLM training, evaluation, and serving, which have significantly enhanced Uber's ability to efficiently develop custom models based on state-of-the-art LLMs such as Llama 2. Gain insight into how Uber is harnessing LLM-driven generative AI to improve the user experience and employee productivity in the mobility and delivery sectors. Access the slide deck for a visual representation of the concepts discussed.
Syllabus
Enabling End-to-End LLMOps on Michelangelo with Ray
Taught by
Anyscale
Related Courses
LLaMA2 for Multilingual Fine Tuning - Sam Witteveen via YouTube
Set Up a Llama2 Endpoint for Your LLM App in OctoAI - Docker via YouTube
AI Engineer Skills for Beginners: Code Generation Techniques - All About AI via YouTube
Training and Evaluating LLaMA2 Models with Argo Workflows and Hera - CNCF [Cloud Native Computing Foundation] via YouTube
LangChain Crash Course - 6 End-to-End LLM Projects with OpenAI, LLAMA2, and Gemini Pro - Krish Naik via YouTube