Scaling Inference Deployments with NVIDIA Triton Inference Server and Ray Serve
Offered By: Anyscale via YouTube
Course Description
Overview
Explore the collaboration between Ray Serve and NVIDIA Triton Inference Server in this conference talk from Ray Summit 2024. Learn about the new in-process Python API for Triton Inference Server and how it integrates with Ray Serve applications. Discover how this partnership combines the strengths of both open-source platforms to scale inference deployments. Gain insights into improving ML model performance through a Stable Diffusion demo, and understand the benefits of Triton's optimization tools, Performance Analyzer and Model Analyzer. See how to fine-tune model configurations against specific throughput and latency requirements, empowering you to optimize your inference deployments effectively.
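For context, the integration pattern the talk covers looks roughly like the minimal sketch below: each Ray Serve replica hosts a Triton server in-process via the tritonserver Python package, so Ray Serve handles replica scaling while Triton executes the model. The model repository path, model name, and tensor names used here ("./models", "stable_diffusion", "prompt", "generated_image") are illustrative assumptions, not details taken from the talk.

```python
# Sketch: embedding Triton's in-process Python API in a Ray Serve deployment.
# Assumes a local Triton model repository at ./models containing a model
# named "stable_diffusion" with a "prompt" input and "generated_image" output.
import numpy
import tritonserver
from ray import serve
from starlette.requests import Request


@serve.deployment(ray_actor_options={"num_gpus": 1})
class TritonDeployment:
    def __init__(self):
        # Each Ray Serve replica owns one in-process Triton server.
        self._server = tritonserver.Server(model_repository="./models")
        self._server.start(wait_until_ready=True)
        self._model = self._server.model("stable_diffusion")

    def __call__(self, request: Request) -> list:
        prompt = request.query_params["prompt"]
        # infer() yields responses; output tensors are convertible via DLPack.
        for response in self._model.infer(inputs={"prompt": [[prompt]]}):
            image = numpy.from_dlpack(response.outputs["generated_image"])
            return image.squeeze().tolist()


app = TritonDeployment.bind()
# Deploy with: serve run my_module:app
```

With the server running, Triton's Performance Analyzer can sweep request load to measure the throughput/latency trade-off (for example, perf_analyzer -m stable_diffusion --concurrency-range 1:8, an illustrative invocation), and Model Analyzer can then search model configuration variants against those targets.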
Syllabus
Scaling Inference Deployments with NVIDIA Triton Inference Server and Ray Serve | Ray Summit 2024
Taught by
Anyscale
Related Courses
Optimizing LLM Inference with AWS Trainium, Ray, vLLM, and Anyscale (Anyscale via YouTube)
Scalable and Cost-Efficient AI Workloads with AWS and Anyscale (Anyscale via YouTube)
End-to-End LLM Workflows with Anyscale (Anyscale via YouTube)
Developing and Serving RAG-Based LLM Applications in Production (Anyscale via YouTube)
Deploying Many Models Efficiently with Ray Serve (Anyscale via YouTube)