Ray Observability 2.0 - Debugging Applications with New Tooling
Offered By: Anyscale via YouTube
Course Description
Overview
Explore new observability tools in Ray and Anyscale for debugging large-scale machine learning workloads in this 30-minute conference talk. Learn how to effectively troubleshoot both offline (preprocessing, training, tuning, inference) and online (serving) ML applications using advanced tooling. Follow along as the speaker demonstrates developing an ML workload and bringing it to production, showcasing the various debugging tools provided by Anyscale and Ray. Gain insights into fundamental observability techniques for beginners and discover advanced functionality for tackling complex errors. By the end, acquire practical knowledge on leveraging these tools to enhance your Ray applications and streamline your ML development process.
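To give a concrete flavor of the tooling the talk covers, here is a minimal sketch (not taken from the talk itself) of programmatic debugging with the Ray state observability API introduced with Ray 2.x; it assumes Ray >= 2.0, and the `flaky` task and the specific queries are illustrative examples only.

```python
# Minimal sketch, assuming Ray >= 2.0 where ray.util.state is available.
# The workload below is a made-up example, not the one from the talk.
import ray
from ray.util.state import list_tasks, summarize_tasks

ray.init()

@ray.remote
def flaky(i: int) -> int:
    # Fails on some inputs so there is something to debug.
    if i % 5 == 0:
        raise ValueError(f"bad input: {i}")
    return i * i

# Launch work and wait without raising, so failed tasks stay inspectable.
refs = [flaky.remote(i) for i in range(20)]
ray.wait(refs, num_returns=len(refs), timeout=30)

# Query cluster state programmatically -- the same data the Ray dashboard shows:
# which tasks failed, plus an aggregate summary grouped by task name and state.
failed = list_tasks(filters=[("state", "=", "FAILED")])
print(f"{len(failed)} failed tasks")
print(summarize_tasks())
```

The same state information is also available from the command line (for example `ray list tasks` and `ray summary tasks`) and through the Ray dashboard, which the talk walks through in more depth.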
Syllabus
Ray Observability 2.0: How to Debug Your Ray Applications with New Observability Tooling
Taught by
Anyscale
Related Courses
Optimizing LLM Inference with AWS Trainium, Ray, vLLM, and Anyscale
Anyscale via YouTube
Scalable and Cost-Efficient AI Workloads with AWS and Anyscale
Anyscale via YouTube
End-to-End LLM Workflows with Anyscale
Anyscale via YouTube
Developing and Serving RAG-Based LLM Applications in Production
Anyscale via YouTube
Deploying Many Models Efficiently with Ray Serve
Anyscale via YouTube