Leveraging Wasm for Portable AI Inference Across GPUs, CPUs, OS and Cloud-Native Environments

Offered By: CNCF [Cloud Native Computing Foundation] via YouTube

Tags

WebAssembly Courses
Kubernetes Courses
GPU Computing Courses
Scalability Courses
Edge Computing Courses
Cross-Platform Development Courses
Cloud Native Computing Courses

Course Description

Overview

Explore the advantages of using WebAssembly (Wasm) for AI inference in cloud-native ecosystems through this 25-minute conference talk. Discover how Wasm lets developers build AI applications on their personal computers and run them uniformly across GPUs, CPUs, operating systems, and edge cloud environments. Learn how Wasm integrates with cloud-native frameworks to improve the deployment and scalability of AI applications. Gain insight into how Wasm provides a flexible and efficient solution for diverse cloud-native architectures, including Kubernetes, letting developers harness the full potential of large language models (LLMs), particularly open-source ones. Tailored for cloud-native practitioners and AI developers, this talk covers how to maximize AI application potential by leveraging Wasm's cross-platform capabilities, ensuring consistency, cost-effectiveness, and efficiency in AI inference across varied computing environments.

Syllabus

Leveraging Wasm for Portable AI Inference Across GPUs, CPUs, OS & Cloud-Nativ... Miley Fu & Lucas Lu


Taught by

CNCF [Cloud Native Computing Foundation]

Related Courses

Fog Networks and the Internet of Things
Princeton University via Coursera
AWS IoT: Developing and Deploying an Internet of Things
Amazon Web Services via edX
Business Considerations for 5G with Edge, IoT, and AI
Linux Foundation via edX
5G Strategy for Business Leaders
Linux Foundation via edX
Intel® Edge AI Fundamentals with OpenVINO™
Intel via Udacity