Empower Large Language Models Serving in Production with Cloud Native AI Technologies
Offered By: CNCF [Cloud Native Computing Foundation] via YouTube
Course Description
Overview
Explore the challenges and solutions of deploying Large Language Models (LLMs) in production using Cloud Native AI technologies. Learn how to optimize LLM serving by extending KServe to handle OpenAI-style streaming requests, reduce model loading time with Fluid and Vineyard, and implement cost-effective auto-scaling strategies. Gain insights from KServe and Fluid maintainers on overcoming production challenges, and discover practical techniques for balancing performance and cost in LLM deployments. Understand the role of timed auto-scaling with cronHPA and how to evaluate the cost-effectiveness of scaling processes. Benefit from real-world experiences and best practices for using Cloud Native AI effectively in production environments.
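The timed auto-scaling the course mentions is typically expressed as a CronHorizontalPodAutoscaler resource from the kubernetes-cronhpa-controller project. The sketch below is an illustrative assumption, not material from the talk; the deployment name, replica counts, and schedules are hypothetical:

```yaml
# Illustrative sketch: scale an LLM inference deployment up before peak hours
# and back down overnight. All names and numbers are hypothetical.
apiVersion: autoscaling.alibabacloud.com/v1beta1
kind: CronHorizontalPodAutoscaler
metadata:
  name: llm-serving-cronhpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: llm-inference         # hypothetical deployment name
  jobs:
    - name: scale-up-morning
      schedule: "0 0 8 * * *"   # 08:00 daily (controller uses 6-field cron with seconds)
      targetSize: 4
    - name: scale-down-night
      schedule: "0 0 22 * * *"  # 22:00 daily
      targetSize: 1
```

Scheduling scale-up ahead of known traffic peaks avoids the cold-start cost of loading large model weights on demand, which is where Fluid's data acceleration also helps.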
Syllabus
Empower Large Language Models (LLMs) Serving in Production with Cloud Native... - Lize Cai & Yang Che
Taught by
CNCF [Cloud Native Computing Foundation]
Related Courses
Building Document Intelligence Applications with Azure Applied AI and Azure Cognitive Services (Microsoft via YouTube)
Unlocking the Power of OpenAI for Startups - Microsoft for Startups (Microsoft via YouTube)
AI Show - Ignite Recap: Arc-Enabled ML, Language Services, and OpenAI (Microsoft via YouTube)
Building Intelligent Applications with World-Class AI (Microsoft via YouTube)
Build an AI Image Generator with OpenAI & Node.js (Traversy Media via YouTube)