YoVDO

Deploying LLM Workloads on Kubernetes Using WasmEdge and Kuasar

Offered By: CNCF [Cloud Native Computing Foundation] via YouTube

Tags

Kubernetes Courses, WebAssembly Courses, LLM (Large Language Model) Courses, LLaMA (Large Language Model Meta AI) Courses, GPU Computing Courses, Scalability Courses, Containerization Courses, Cloud Native Computing Courses, WasmEdge Courses

Course Description

Overview

Explore the deployment of Large Language Model (LLM) workloads on Kubernetes using WasmEdge and Kuasar in this keynote presentation from the Cloud Native Computing Foundation (CNCF) conference. Discover how these innovative technologies address challenges in running LLMs, including complex package installations, GPU compatibility issues, scaling limitations, and security vulnerabilities. Learn how WasmEdge enables the development of fast, agile, resource-efficient, and secure LLM applications, while Kuasar facilitates running applications on Kubernetes with faster container startup and reduced management overhead. Witness a demonstration of running Llama3-8B on a Kubernetes cluster using WasmEdge and Kuasar as container runtimes. Gain insights into how Kubernetes enhances efficiency, scalability, and stability in LLM deployment and operations, providing valuable knowledge for developers and organizations looking to leverage the power of LLMs in cloud-native environments.
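The deployment described above can be sketched as a Kubernetes manifest. This is a minimal illustration under stated assumptions, not the presenters' exact configuration: the RuntimeClass handler name (`kuasar-wasm`), the image reference, and the GPU resource request are hypothetical placeholders that depend on how containerd and the Kuasar Wasm sandboxer are set up on the cluster nodes.

```yaml
# RuntimeClass pointing at a containerd runtime handler backed by the
# Kuasar Wasm sandboxer. The handler name is an assumption; it must
# match the runtime entry configured in the node's containerd config.
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: kuasar-wasm
handler: kuasar-wasm
---
# Pod running a WasmEdge-based LLM inference application under that
# runtime. The image name is a hypothetical placeholder.
apiVersion: v1
kind: Pod
metadata:
  name: llama3-wasm
spec:
  runtimeClassName: kuasar-wasm
  containers:
    - name: llm
      image: example.registry/llama3-8b-wasmedge:latest  # assumption
      resources:
        limits:
          nvidia.com/gpu: 1  # only if the nodes expose GPUs to the runtime
```

Selecting the runtime per Pod via `runtimeClassName` is what lets WasmEdge workloads run side by side with ordinary OCI containers on the same cluster.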

Syllabus

Keynote: Deploying LLM Workloads on Kubernetes by WasmEdge and Kuasar - Tianyang Zhang & Vivian Hu


Taught by

CNCF [Cloud Native Computing Foundation]

Related Courses

Kubernetes: Cloud Native Ecosystem
LinkedIn Learning
Cloud Native Certified Kubernetes Administrator (CKA) (Legacy)
A Cloud Guru
Implement Resiliency in a Cloud-Native ASP.NET Core Microservice
Microsoft via YouTube
Open Networking & Edge Executive Forum 2021 - Day 1 Part 2 Sessions
Linux Foundation via YouTube