Deploying LLM Workloads on Kubernetes Using WasmEdge and Kuasar
Offered By: CNCF [Cloud Native Computing Foundation] via YouTube
Course Description
Overview
Explore the deployment of Large Language Model (LLM) workloads on Kubernetes using WasmEdge and Kuasar in this keynote presentation from the Cloud Native Computing Foundation (CNCF) conference. Discover how these innovative technologies address challenges in running LLMs, including complex package installations, GPU compatibility issues, scaling limitations, and security vulnerabilities. Learn how WasmEdge enables the development of fast, agile, resource-efficient, and secure LLM applications, while Kuasar facilitates running applications on Kubernetes with faster container startup and reduced management overhead. Witness a demonstration of running Llama3-8B on a Kubernetes cluster using WasmEdge and Kuasar as container runtimes. Gain insights into how Kubernetes enhances efficiency, scalability, and stability in LLM deployment and operations, providing valuable knowledge for developers and organizations looking to leverage the power of LLMs in cloud-native environments.
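The demo described above — routing a Pod to a Wasm-capable runtime via Kuasar — can be sketched with standard Kubernetes objects. This is a minimal, hypothetical configuration: the RuntimeClass name, handler, image, and resource figures are illustrative assumptions, not details taken from the talk.

```yaml
# Hypothetical sketch: register a Kuasar-managed Wasm runtime, then
# schedule a WasmEdge-based LLM workload onto it via runtimeClassName.
# Handler name and image are assumptions, not details from the talk.
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: kuasar-wasm   # assumed name; must match a runtime handler configured in containerd
handler: wasm         # assumed containerd shim handler for Kuasar's Wasm sandboxer
---
apiVersion: v1
kind: Pod
metadata:
  name: llama3-8b
spec:
  runtimeClassName: kuasar-wasm   # route this Pod to the Wasm runtime instead of runc
  containers:
  - name: llm
    image: example.registry/llama3-8b-wasmedge:latest   # hypothetical image name
    resources:
      limits:
        memory: "16Gi"   # illustrative; model weights dominate memory use
```

The key mechanism is `runtimeClassName`: Kubernetes passes the matching handler to the container runtime, so the same cluster can run conventional Linux containers and lightweight Wasm sandboxes side by side.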
Syllabus
Keynote: Deploying LLM Workloads on Kubernetes by WasmEdge and Kuasar - Tianyang Zhang & Vivian Hu
Taught by
CNCF [Cloud Native Computing Foundation]
Related Courses
Fundamentals of Containers, Kubernetes, and Red Hat OpenShift (Red Hat via edX)
Configuration Management for Containerized Delivery (Microsoft via edX)
Getting Started with Google Kubernetes Engine - Español (Google Cloud via Coursera)
Getting Started with Google Kubernetes Engine - 日本語版 (Google Cloud via Coursera)
Architecting with Google Kubernetes Engine: Foundations en Español (Google Cloud via Coursera)