Efficient and Portable AI/LLM Inference on the Edge Cloud - Workshop
Offered By: Linux Foundation via YouTube
Course Description
Overview
Explore efficient and portable AI/LLM inference on the edge cloud in this 48-minute workshop presented by Xiaowei Hu of Second State. Learn about the challenges of running AI workloads on heterogeneous hardware and discover how WebAssembly (Wasm) offers a lightweight, fast, and portable solution. Gain hands-on experience creating and running Wasm-based AI applications on edge servers or local hosts. Examine practical examples using AI models and libraries for media processing (MediaPipe), computer vision (YOLO, LLaVA), and natural language processing (the Llama 2 series). Follow along with live demonstrations and run all the examples on your own laptop during the session, gaining practical insight into efficient AI deployment strategies for edge computing environments.
Syllabus
Workshop: Efficient and Portable AI / LLM Inference on the Edge Cloud - Xiaowei Hu, Second State
Taught by
Linux Foundation
Related Courses
Fog Networks and the Internet of Things - Princeton University via Coursera
AWS IoT: Developing and Deploying an Internet of Things - Amazon Web Services via edX
Business Considerations for 5G with Edge, IoT, and AI - Linux Foundation via edX
5G Strategy for Business Leaders - Linux Foundation via edX
Intel® Edge AI Fundamentals with OpenVINO™ - Intel via Udacity