Write Once Run Anywhere for GPUs - Portable AI Workloads with Rust and WebAssembly

Offered By: CNCF [Cloud Native Computing Foundation] via YouTube

Tags

WebAssembly Courses Cloud Computing Courses Rust Courses LLM (Large Language Model) Courses GPU Computing Courses Edge Computing Courses Cross-Platform Development Courses

Course Description

Overview

Explore the potential of Rust and WebAssembly (Wasm) for cross-platform AI workload deployment in this 41-minute conference talk by Michael Yuan from Second State. Learn about LlamaEdge, a lightweight, high-performance LLM inference runtime that leverages WasmEdge to provide a standard WASI-NN API for developers. Discover how this approach enables writing code once and running it on a wide range of devices, with WasmEdge handling the translation to native libraries such as llama.cpp. Delve into the design and implementation of LlamaEdge, and follow code examples ranging from basic sentence completion to chatbots, RAG agents with vector databases, and Kubernetes-managed applications across heterogeneous clusters. Gain insight into the future of AI application development and deployment in the era of GPUs and cloud computing.

Syllabus

Write Once Run Anywhere, but for GPUs | "Write Once, Run Anywhere" in the GPU Era - Michael Yuan, Second State


Taught by

CNCF [Cloud Native Computing Foundation]

Related Courses

Biomolecular Modeling on GPU (Моделирование биологических молекул на GPU)
Moscow Institute of Physics and Technology via Coursera
Practical Deep Learning For Coders
fast.ai via Independent
GPU Architectures And Programming
Indian Institute of Technology, Kharagpur via Swayam
Perform Real-Time Object Detection with YOLOv3
Coursera Project Network via Coursera
Getting Started with PyTorch
Coursera Project Network via Coursera