Deploying LLM Workloads on Kubernetes with WasmEdge and Kuasar
Offered By: Linux Foundation via YouTube
Course Description
Overview
Explore the deployment of Large Language Model (LLM) workloads on Kubernetes using WasmEdge and Kuasar in this informative keynote presentation. Discover how these innovative technologies address challenges in running LLMs, including complex package installations, GPU compatibility issues, scaling limitations, and security vulnerabilities.

Learn about WasmEdge's solution for developing fast, agile, resource-efficient, and secure LLM applications, as well as Kuasar's ability to enable faster container startup and reduced management overhead on Kubernetes. Witness a demonstration of running Llama3-8B on a Kubernetes cluster using WasmEdge and Kuasar as container runtimes. Gain insights into how Kubernetes enhances efficiency, scalability, and stability in LLM deployment and operations, providing valuable knowledge for developers and IT professionals working with advanced AI models.
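The demonstration described above hinges on registering a Wasm-capable runtime with Kubernetes so that Pods can opt into it. A minimal sketch of what such a setup might look like, assuming a Kuasar-managed containerd runtime registered under the hypothetical handler name "wasm" and a hypothetical OCI image "docker.io/example/llama-api-server:latest" packaging a WasmEdge-compatible LLM server module (the talk does not specify these names):

```yaml
# RuntimeClass pointing Kubernetes at a Wasm-capable runtime handler.
# The handler name must match a runtime configured in containerd on the
# node; "wasm" is an assumed name here, not a documented Kuasar default.
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: wasm
handler: wasm
---
# Pod that opts into the Wasm runtime via runtimeClassName.
# The image name below is an illustrative placeholder.
apiVersion: v1
kind: Pod
metadata:
  name: llama3-8b
spec:
  runtimeClassName: wasm
  containers:
  - name: llm
    image: docker.io/example/llama-api-server:latest
```

Applied with kubectl apply -f, the RuntimeClass tells the kubelet which containerd runtime handler to use, and any Pod that sets runtimeClassName is then launched through the Kuasar/WasmEdge path rather than a conventional Linux container runtime.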
Syllabus
Keynote: Deploying LLM Workloads on Kubernetes by WasmEdge and Kuasar - Tianyang Zhang & Vivian Hu
Taught by
Linux Foundation
Related Courses
Software as a Service - University of California, Berkeley via Coursera
Software Defined Networking - Georgia Institute of Technology via Coursera
Pattern-Oriented Software Architectures: Programming Mobile Services for Android Handheld Systems - Vanderbilt University via Coursera
Web-Technologien - openHPI
Données et services numériques, dans le nuage et ailleurs - Certificat informatique et internet via France Université Numerique