Empower Large Language Models Serving in Production with Cloud Native AI Technologies

Offered By: CNCF [Cloud Native Computing Foundation] via YouTube

Tags

Auto-scaling, OpenAI, KServe

Course Description

Overview

Explore the challenges and solutions for deploying Large Language Models (LLMs) in production environments using Cloud Native AI technologies. Learn how to optimize LLM serving by extending KServe to handle OpenAI's streaming requests, reducing model loading time with Fluid and Vineyard, and implementing cost-effective auto-scaling strategies. Gain insights from KServe and Fluid maintainers on overcoming production challenges, and discover practical techniques for balancing performance and cost in LLM deployments. Understand the importance of timed auto-scaling with cronHPA and evaluate the cost-effectiveness of scaling processes. Benefit from real-world experiences and best practices for effectively utilizing Cloud Native AI in production environments.
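To make the streaming part of this more concrete, here is a minimal sketch of a client consuming a streaming chat completion from an OpenAI-compatible endpoint, such as one exposed by a KServe LLM InferenceService as described in the talk. The base URL, route, model name, and API key below are placeholders and assumptions, not values taken from the session.

```python
# Minimal sketch: consume a streaming (server-sent events) chat completion
# from an OpenAI-compatible endpoint. Assumes the `openai` Python SDK (>=1.0)
# and a hypothetical KServe-served model; adjust base_url/model to your setup.
from openai import OpenAI

client = OpenAI(
    base_url="http://llm.example.com/openai/v1",  # hypothetical KServe route
    api_key="not-needed",                          # many self-hosted endpoints ignore the key
)

# stream=True asks the server to return tokens incrementally, so the client
# can render partial output instead of waiting for the full completion.
stream = client.chat.completions.create(
    model="llama-2-7b-chat",  # placeholder model name
    messages=[{"role": "user", "content": "Summarize Cloud Native AI in one sentence."}],
    stream=True,
)

for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
print()
```

The same client code works against any OpenAI-compatible server, which is the point of extending KServe this way: clients built for OpenAI's streaming API can be pointed at a self-hosted LLM without changes.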

Syllabus

Empower Large Language Models (LLMs) Serving in Production with Cloud Native AI Technologies - Lize Cai & Yang Che


Taught by

CNCF [Cloud Native Computing Foundation]

Related Courses

Data Engineering on Google Cloud Platform 日本語版
Google Cloud via Coursera
Cloud Computing Fundamentals on Alibaba Cloud
Alibaba Cloud Academy via Coursera
Launch an auto-scaling AWS EC2 virtual machine
Coursera Project Network via Coursera
Cloud Computing With Amazon Web Services
Udemy
AWS Certified Solution Architect - Associate 2020
Udemy