Effortless Scalability: Orchestrating Large Language Model Inference with Kubernetes

Offered By: Linux Foundation via YouTube

Tags

Kubernetes, Inference, Scalability, Orchestration, Containerization, Custom Resource Definitions

Course Description

Overview

Explore the deployment and orchestration of large open-source inference models on Kubernetes in this 27-minute talk from the Linux Foundation. Learn how the deployment of heavyweight models such as Falcon and Llama 2 can be automated with Kubernetes Custom Resource Definitions (CRDs), with large model files managed through container images. The talk covers streamlining deployment with an HTTP server for inference calls, eliminating manual tuning of deployment parameters through preset configurations, and auto-provisioning GPU nodes based on each model's requirements. Discover how users can deploy containerized models by providing pod templates in the inference field of a workspace custom resource, letting the controller dynamically create deployment workloads that utilize all available GPU nodes. Gain insights into optimizing resource utilization and simplifying large language model inference deployments in the rapidly evolving AI/ML landscape.
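The workflow described above — a workspace custom resource that names a model preset, requests GPU capacity, and optionally carries a pod template for the controller to expand into a deployment — might look like the following sketch. This is an illustrative manifest only: the API group, version, and field names are assumptions for the sake of the example, not the exact schema presented in the talk.

```yaml
# Hypothetical Workspace custom resource illustrating the pattern from the talk.
# Field names and the API group are assumptions, not a real published schema.
apiVersion: example.ai/v1alpha1
kind: Workspace
metadata:
  name: falcon-7b-workspace
resource:
  instanceType: Standard_NC12s_v3   # GPU node size the controller would auto-provision
  count: 2                          # number of GPU nodes to utilize
inference:
  preset:
    name: falcon-7b                 # preset deployment parameters, no manual tuning
  template:                         # optional pod template for a custom model container
    spec:
      containers:
        - name: inference
          image: registry.example.com/falcon-7b:latest  # model files baked into the image
          ports:
            - containerPort: 80     # HTTP server endpoint for inference calls
```

In this pattern, the controller watches Workspace objects, provisions the requested GPU nodes, and creates a Deployment (plus a Service fronting the HTTP inference endpoint) from either the preset or the supplied pod template.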

Syllabus

Effortless Scalability: Orchestrating Large Language Model Inference with Kubernetes - Joinal Ahmed & Nirav Kumar


Taught by

Linux Foundation

Related Courses

Ultimate Prometheus
Udemy
Creating Custom Resources in Kubernetes
Pluralsight
Extending Kubernetes with Operator Patterns
LinkedIn Learning
Extending Kubernetes - Moving Compose on Kubernetes from a CRD to API Aggregation
Docker via YouTube
Introduction to the Operator SDK - Building Kubernetes Operators
Rawkode Academy via YouTube