Running ML Workloads with AWS Purpose-Built ML Accelerators and Ray

Offered By: Anyscale via YouTube

Tags

AWS Inferentia Courses, Machine Learning Courses, Python Courses, Cloud Computing Courses, Distributed Systems Courses, Generative AI Courses, Anyscale Courses

Course Description

Overview

Discover how to leverage AWS purpose-built ML accelerators, including AWS Trainium and AWS Inferentia, for high-performance, cost-effective Generative AI applications in the cloud. Explore the new native support for these accelerators available in Ray, the popular open-source framework for scaling and productionizing AI workloads. Learn about Anyscale, the AI Application Platform for developing, running, and scaling AI, and its managed Ray service. Gain insights into how Ray powers the world's most ambitious AI workloads, from Generative AI and LLMs to computer vision, in this 13-minute session presented by Anyscale.
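To make the idea of Ray's native accelerator support concrete, here is a minimal sketch of how a task might be scheduled onto AWS Inferentia/Trainium NeuronCores. It assumes a Ray version that advertises NeuronCores as a "neuron_cores" resource on accelerator-backed nodes; the resource name and the placeholder task body are illustrative assumptions, not material from the session itself.

```python
import ray

# Connect to an existing Ray cluster (or start a local one).
ray.init()

# Sketch: request one NeuronCore so the task lands on an
# Inferentia/Trainium node. If no node advertises the
# "neuron_cores" resource, the task will simply stay pending.
@ray.remote(resources={"neuron_cores": 1})
def generate(prompt: str) -> str:
    # In a real workload, a Neuron-compiled model (e.g. built with
    # torch-neuronx) would be loaded and invoked here; this sketch
    # returns a placeholder string instead.
    return f"generated text for: {prompt}"

if __name__ == "__main__":
    print(ray.get(generate.remote("Hello from Inferentia")))
```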

Syllabus

Running ML Workloads with AWS Purpose-Built ML Accelerators and Ray


Taught by

Anyscale

Related Courses

Optimizing LLM Inference with AWS Trainium, Ray, vLLM, and Anyscale
Anyscale via YouTube
Scalable and Cost-Efficient AI Workloads with AWS and Anyscale
Anyscale via YouTube
End-to-End LLM Workflows with Anyscale
Anyscale via YouTube
Developing and Serving RAG-Based LLM Applications in Production
Anyscale via YouTube
Deploying Many Models Efficiently with Ray Serve
Anyscale via YouTube