Serving Large Language Models with KubeRay on TPUs
Offered By: Anyscale via YouTube
Course Description
Overview
Discover how to serve large language models using KubeRay on TPUs in this 25-minute talk from Anyscale. Learn about the technical challenges of serving models with hundreds of billions of parameters and explore how integrating KubeRay with TPUs creates a powerful platform for efficient LLM deployment. Gain insights into the benefits of this approach, including increased performance, improved scalability, reduced costs, enhanced flexibility, and better monitoring capabilities. Understand how KubeRay simplifies Ray cluster management on cloud platforms, while TPUs provide specialized processing power for neural network workloads. Access the accompanying slide deck for visual references and dive deeper into the world of distributed machine learning with Ray, the popular open-source framework for scaling AI workloads.
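The KubeRay-plus-TPU pairing the talk describes can be illustrated with a Kubernetes manifest. The following is a minimal sketch, assuming a GKE cluster with the KubeRay operator installed; the cluster name, container image, TPU accelerator type, topology, and chip count are illustrative placeholders, not values from the talk:

```yaml
# Hypothetical RayCluster manifest: a sketch of how KubeRay can place
# Ray worker pods onto TPU node pools in GKE. All concrete values
# below are assumptions for illustration.
apiVersion: ray.io/v1
kind: RayCluster
metadata:
  name: llm-serving-cluster        # placeholder name
spec:
  headGroupSpec:
    rayStartParams: {}
    template:
      spec:
        containers:
          - name: ray-head
            image: rayproject/ray:latest   # placeholder image
  workerGroupSpecs:
    - groupName: tpu-workers
      replicas: 1
      rayStartParams: {}
      template:
        spec:
          # GKE node-selector labels steer these pods onto TPU nodes;
          # the accelerator type and topology here are assumed examples.
          nodeSelector:
            cloud.google.com/gke-tpu-accelerator: tpu-v4-podslice
            cloud.google.com/gke-tpu-topology: 2x2x1
          containers:
            - name: ray-worker
              image: rayproject/ray:latest  # placeholder image
              resources:
                limits:
                  google.com/tpu: "4"       # TPU chips requested per worker pod
```

With a manifest of this shape, the KubeRay operator provisions the head and worker pods, and Ray's scheduler can then dispatch model-serving replicas onto the TPU-backed workers.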
Syllabus
Serving Large Language Models with KubeRay on TPUs
Taught by
Anyscale
Related Courses
Financial Sustainability: The Numbers side of Social Enterprise
+Acumen via NovoEd
Cloud Computing Concepts: Part 2
University of Illinois at Urbana-Champaign via Coursera
Developing Repeatable Models® to Scale Your Impact
+Acumen via Independent
Managing Microsoft Windows Server Active Directory Domain Services
Microsoft via edX
Introduction aux conteneurs
Microsoft Virtual Academy via OpenClassrooms