
Efficient and Portable AI/LLM Inference on the Edge Cloud - Workshop

Offered By: Linux Foundation via YouTube

Tags

Edge Computing Courses, Computer Vision Courses, WebAssembly Courses, YOLO Courses, LLM (Large Language Model) Courses, Cloud Native Computing Courses, Mediapipe Courses, LLaMA2 Courses, LLaVA Courses

Course Description

Overview

Explore efficient and portable AI/LLM inference on the edge cloud in this 48-minute workshop presented by Xiaowei Hu from Second State. Learn about the challenges of running AI workloads on heterogeneous hardware and discover how WebAssembly (Wasm) offers a lightweight, fast, and portable solution. Gain hands-on experience creating and running Wasm-based AI applications on edge servers or local hosts. Examine practical examples using AI models and libraries for media processing (MediaPipe), computer vision (YOLO, LLaVA), and natural language processing (the Llama2 series). Follow along with live demonstrations and run all of the examples on your own laptop during the session, gaining practical insight into efficient AI deployment strategies for edge computing environments.
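Wasm-based LLM inference of the kind described above is typically built on a WASI-NN-capable runtime such as WasmEdge, which Second State maintains. The following is a minimal sketch, not taken from the workshop itself, assuming the wasmedge-wasi-nn Rust crate and a GGUF model preloaded by the host runtime under the alias "default"; the identifiers, prompt, and buffer size are illustrative.

    // Minimal LLM inference sketch for a Wasm/WASI-NN runtime such as WasmEdge.
    // Assumes the host preloaded a model under the alias "default", e.g.:
    //   wasmedge --nn-preload default:GGML:AUTO:<model>.gguf app.wasm
    use wasmedge_wasi_nn::{ExecutionTarget, GraphBuilder, GraphEncoding, TensorType};

    fn main() {
        // Load the GGML/GGUF model that the host runtime preloaded as "default".
        let graph = GraphBuilder::new(GraphEncoding::Ggml, ExecutionTarget::AUTO)
            .build_from_cache("default")
            .expect("failed to load preloaded model");

        // Each inference request runs in its own execution context.
        let mut ctx = graph
            .init_execution_context()
            .expect("failed to create execution context");

        // The prompt is passed as a UTF-8 byte tensor at input index 0.
        let prompt = b"What is WebAssembly?".to_vec();
        ctx.set_input(0, TensorType::U8, &[1], &prompt)
            .expect("failed to set prompt");

        // Run inference and read the generated text back from output index 0.
        ctx.compute().expect("inference failed");
        let mut out = vec![0u8; 4096];
        let n = ctx.get_output(0, &mut out).expect("failed to read output");
        println!("{}", String::from_utf8_lossy(&out[..n]));
    }

Compiled once with cargo build --target wasm32-wasi --release, a binary like this can be moved unchanged between x86 and Arm edge hosts, which is the portability point the workshop emphasizes.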

Syllabus

Workshop: Efficient and Portable AI / LLM Inference on the Edge Cloud - Xiaowei Hu, Second State


Taught by

Linux Foundation


Related Courses

LLaMA2 for Multilingual Fine Tuning (Sam Witteveen via YouTube)
Set Up a Llama2 Endpoint for Your LLM App in OctoAI (Docker via YouTube)
AI Engineer Skills for Beginners: Code Generation Techniques (All About AI via YouTube)
Training and Evaluating LLaMA2 Models with Argo Workflows and Hera (CNCF [Cloud Native Computing Foundation] via YouTube)
LangChain Crash Course - 6 End-to-End LLM Projects with OpenAI, LLAMA2, and Gemini Pro (Krish Naik via YouTube)