vLLM on Kubernetes in Production - Deployment and Cost-Saving Strategies
Offered By: Kubesimplify via YouTube
Course Description
Overview
Explore the fundamentals of vLLM, a fast and easy-to-use library for LLM inference and serving, in this 28-minute video tutorial. Learn how to run vLLM locally and how to deploy it to a production Kubernetes cluster, scheduling it onto GPU-attached nodes with a DaemonSet. A hands-on demonstration walks through the production setup, and a real-world case study, detailed in the accompanying blog post, shows how open-source AI can be deployed cost-effectively. Presented by John McBride, this Kubesimplify tutorial offers practical guidance for running vLLM efficiently on Kubernetes.
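The local half of the tutorial maps onto vLLM's Python API. The following is a minimal sketch of offline inference, assuming vLLM is installed (pip install vllm) and a CUDA-capable GPU is available; the model name and prompts are illustrative placeholders, not necessarily those used in the video.

```python
# Minimal local vLLM run: load an open-weight model and generate completions.
# Assumes `pip install vllm` and a CUDA-capable GPU; the model and prompts
# below are placeholders for illustration.
from vllm import LLM, SamplingParams

# Load the model; vLLM manages batching and KV-cache paging internally.
llm = LLM(model="facebook/opt-125m")

# Sampling settings for generation.
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

prompts = ["Kubernetes is", "vLLM makes LLM serving"]
outputs = llm.generate(prompts, sampling_params)

for output in outputs:
    # Each result carries the original prompt and its generated completions.
    print(output.prompt, "->", output.outputs[0].text)
```

In the production half of the video, the same serving workload runs as a container in the cluster, placed onto the GPU-attached nodes by a DaemonSet rather than launched from a script.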
Syllabus
vLLM on Kubernetes in Production
Taught by
Kubesimplify
Related Courses
Biomolecular Modeling on GPU (Моделирование биологических молекул на GPU)
Moscow Institute of Physics and Technology via Coursera
Practical Deep Learning For Coders
fast.ai via Independent
GPU Architectures And Programming
Indian Institute of Technology, Kharagpur via Swayam
Perform Real-Time Object Detection with YOLOv3
Coursera Project Network via Coursera
Getting Started with PyTorch
Coursera Project Network via Coursera