Enabling End-to-End LLMOps on Michelangelo with Ray
Offered By: Anyscale via YouTube
Course Description
Overview
Explore how Uber is leveraging Ray to extend its Michelangelo ML platform for end-to-end LLMOps in this 30-minute conference talk. Discover how Uber has built a scalable, interactive development environment on Ray that can flexibly use hundreds of A100 GPUs. Learn about the integration of open-source techniques for LLM training, evaluation, and serving, which has significantly enhanced Uber's ability to efficiently develop custom models on top of state-of-the-art LLMs like Llama 2. Gain insights into how Uber is harnessing LLM-driven generative AI to improve user experience and employee productivity across its mobility and delivery businesses. Access the slide deck for a visual representation of the concepts discussed.
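The talk itself does not walk through code, but the core mechanism it describes, elastic GPU scheduling on a Ray cluster, can be illustrated with a minimal sketch. The function name, placeholder completions, and prompt batches below are hypothetical and are not Uber's implementation; they only show how Ray's resource-aware remote tasks let a job fan out across whatever GPUs the cluster exposes.

```python
# Minimal sketch (assumptions, not Uber's code) of Ray fanning GPU-bound
# work out across a cluster, the pattern Michelangelo builds on for
# interactive LLM development jobs.
import ray

ray.init()  # connects to an existing cluster, or starts a local one

@ray.remote(num_gpus=1)
def generate(prompts: list[str]) -> list[str]:
    # Each task is pinned to one GPU by the resource request above;
    # a real pipeline would load an LLM (e.g. a Llama 2 checkpoint) here.
    # The returned strings are placeholders for model completions.
    return [f"completion for: {p}" for p in prompts]

# Fan batches out across however many GPUs the cluster provides.
batches = [["prompt A"], ["prompt B"], ["prompt C"]]
futures = [generate.remote(batch) for batch in batches]
print(ray.get(futures))
```

Because the resource request is declarative, the same script scales from a laptop to hundreds of A100s without code changes, which is the flexibility the talk attributes to running Michelangelo workloads on Ray.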
Syllabus
Enabling End-to-End LLMOps on Michelangelo with Ray
Taught by
Anyscale
Related Courses
Large Language Models: Application through Production (Databricks via edX)
LLMOps - LLM Bootcamp (The Full Stack via YouTube)
MLOps: Why DevOps Solutions Fall Short in the Machine Learning World (Linux Foundation via YouTube)
Quick Wins Across the Enterprise with Responsible AI (Microsoft via YouTube)
End-to-End AI App Development: Prompt Engineering to LLMOps (Microsoft via YouTube)