Building a Multi-Cluster Privately Hosted LLM Serving Platform on Kubernetes
Offered By: CNCF [Cloud Native Computing Foundation] via YouTube
Course Description
Overview
Explore the challenges involved in building a cloud-agnostic, privately hosted Large Language Model (LLM) serving platform on Kubernetes in this 26-minute conference talk. Discover how Predibase tackled the complexities of hosting LLMs, including their large size and GPU resource requirements. Learn about the architecture of their control plane and data plane, secured with an Istio service mesh, and their use of KEDA for event-driven autoscaling to support serverless inference of open-source models. Gain practical insights into deploying LLMs and applying these tools and techniques to your own organization's LLM hosting needs.
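As a rough illustration of the event-driven autoscaling idea the talk covers, the sketch below registers a KEDA ScaledObject that scales an LLM inference deployment on a Prometheus metric. It is not Predibase's actual configuration: the deployment name, namespace, metric query, and threshold are placeholder assumptions, and it presumes KEDA and the Python Kubernetes client are already installed.

```python
from kubernetes import client, config

# Load local kubeconfig (use load_incluster_config() when running inside the cluster).
config.load_kube_config()
api = client.CustomObjectsApi()

# Hypothetical ScaledObject: scale the "llm-inference" deployment between 0 and 4 replicas
# based on request rate reported by Prometheus. All names and values are illustrative.
scaled_object = {
    "apiVersion": "keda.sh/v1alpha1",
    "kind": "ScaledObject",
    "metadata": {"name": "llm-inference-scaler", "namespace": "inference"},
    "spec": {
        "scaleTargetRef": {"name": "llm-inference"},
        "minReplicaCount": 0,   # scale to zero when idle (serverless behavior)
        "maxReplicaCount": 4,   # cap GPU usage
        "triggers": [
            {
                "type": "prometheus",
                "metadata": {
                    "serverAddress": "http://prometheus.monitoring:9090",
                    "query": "sum(rate(inference_requests_total[1m]))",
                    "threshold": "10",
                },
            }
        ],
    },
}

# Create the custom resource through the Kubernetes API.
api.create_namespaced_custom_object(
    group="keda.sh",
    version="v1alpha1",
    namespace="inference",
    plural="scaledobjects",
    body=scaled_object,
)
```

In practice the same manifest is usually applied as YAML; the point here is only to show the shape of a scale-to-zero trigger for GPU-backed inference workloads.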
Syllabus
Building a Multi-Cluster Privately Hosted LLM Serving Platform on Kubernetes - Julian Bright & Noah Yoshida
Taught by
CNCF [Cloud Native Computing Foundation]
Related Courses
Biomolecular Modeling on GPU (Моделирование биологических молекул на GPU) - Moscow Institute of Physics and Technology via Coursera
Practical Deep Learning for Coders - fast.ai via Independent
GPU Architectures and Programming - Indian Institute of Technology, Kharagpur via Swayam
Perform Real-Time Object Detection with YOLOv3 - Coursera Project Network via Coursera
Getting Started with PyTorch - Coursera Project Network via Coursera