Deploying Many Models Efficiently with Ray Serve
Offered By: Anyscale via YouTube
Course Description
Overview
Explore efficient deployment and management of multiple models using Ray Serve in this 26-minute conference talk. Gain comprehensive insights into serving numerous models while optimizing resource utilization and maintaining ease of use. Learn about three key features of Ray Serve: model composition, multi-application, and model multiplexing. Discover common industry patterns for serving many models and how Ray Serve simplifies management and enhances performance. Dive into case studies of Ray Serve users running many-model applications in production. Access the slide deck for additional information and visual aids. Understand how Ray, an open-source framework, powers ambitious AI workloads, including Generative AI, LLMs, and computer vision. Consider Anyscale's managed Ray service for developing, running, and scaling AI applications.
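The model-multiplexing pattern the talk covers can be illustrated with a minimal, framework-free sketch: many models share one replica, each loaded on demand and evicted LRU-style when the cache fills. The class, loader, and parameter names below are hypothetical illustrations, not Ray Serve's actual API (Ray Serve exposes this via decorators such as `@serve.multiplexed`).

```python
from collections import OrderedDict

# Hypothetical sketch of the multiplexing pattern: one replica serves many
# models, loading them lazily and evicting the least recently used one.
class MultiplexedReplica:
    def __init__(self, load_model, max_models=3):
        self._load_model = load_model   # user-supplied loader (e.g. reads weights from storage)
        self._max_models = max_models
        self._cache = OrderedDict()     # model_id -> loaded model, in LRU order

    def get_model(self, model_id):
        if model_id in self._cache:
            self._cache.move_to_end(model_id)        # mark as most recently used
        else:
            if len(self._cache) >= self._max_models:
                self._cache.popitem(last=False)      # evict least recently used
            self._cache[model_id] = self._load_model(model_id)
        return self._cache[model_id]

    def predict(self, model_id, request):
        return self.get_model(model_id)(request)

# Toy "models": each just tags the request with its model id.
replica = MultiplexedReplica(lambda mid: (lambda req: f"{mid}:{req}"), max_models=2)
print(replica.predict("model_a", "x"))  # model_a:x
```

The payoff is resource efficiency: instead of one always-on replica per model, a small pool of replicas serves a long tail of models, keeping only the hot ones in memory.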
Syllabus
Deploying Many Models Efficiently with Ray Serve
Taught by
Anyscale
Related Courses
Optimizing LLM Inference with AWS Trainium, Ray, vLLM, and Anyscale (Anyscale via YouTube)
Scalable and Cost-Efficient AI Workloads with AWS and Anyscale (Anyscale via YouTube)
End-to-End LLM Workflows with Anyscale (Anyscale via YouTube)
Developing and Serving RAG-Based LLM Applications in Production (Anyscale via YouTube)
Lessons From Fine-Tuning Llama-2 (Anyscale via YouTube)