Pathfinder to GPU Offload in WASM
Offered By: CNCF [Cloud Native Computing Foundation] via YouTube
Course Description
Overview
Explore the potential integration of GPU offloading models with WebAssembly (WASM) in this conference talk by Intel experts. Delve into the growing importance of parallelization in AI workloads and the challenges of implementing GPU support in WASM environments. Learn about the current limitations of WASM in handling multithreaded, shared memory, vectorized, and device offloaded workloads. Discover the ongoing efforts to integrate CPU-based OpenMP with WASM and the investigation into GPU offloading models such as OpenMP, CUDA, and SYCL. Understand the security implications and potential benefits of incorporating GPU offloading capabilities into WASM, paving the way for more secure, portable, and cloud-compatible AI applications.
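For context, the offload pattern the talk investigates can be written today with OpenMP target directives on native toolchains; the sketch below is a minimal, illustrative example (not taken from the talk) of the kind of host-to-device loop a WASM runtime would need to support. The array sizes, names, and SAXPY computation are assumptions chosen purely for illustration.

    // Minimal sketch (assumed example): a SAXPY-style loop offloaded via
    // OpenMP target directives. In a WASM context, the open question the
    // talk raises is what runtime would service this offload region.
    #include <cstdio>
    #include <vector>

    int main() {
        const int n = 1 << 20;
        std::vector<float> x(n, 1.0f), y(n, 2.0f);
        const float a = 0.5f;
        float *xp = x.data(), *yp = y.data();

        // Map the arrays to the device, run the loop there, copy y back.
        // Falls back to the host if no offload device is available.
        #pragma omp target teams distribute parallel for \
            map(to: xp[0:n]) map(tofrom: yp[0:n])
        for (int i = 0; i < n; ++i) {
            yp[i] = a * xp[i] + yp[i];
        }

        std::printf("y[0] = %f\n", yp[0]);  // expect 2.5
        return 0;
    }

With a native compiler this builds with -fopenmp plus an offload target flag (for example, -fopenmp-targets=... in Clang or Intel oneAPI compilers); it is exactly this device-offload plumbing that current WASM toolchains lack.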
Syllabus
Pathfinder to GPU Offload in WASM - Atanas Atanasov & Aaron Dorney, Intel
Taught by
CNCF [Cloud Native Computing Foundation]
Related Courses
Open Source GPU Compute Stack - Not Dancing the CUDA Dance (Linux Plumbers Conference via YouTube)
HPX and GPU Parallelized STL (CppNow via YouTube)
Tensorflow on Open Source GPUs (linux.conf.au via YouTube)
But Mummy I Don't Want to Use CUDA - Open Source GPU Compute (linux.conf.au via YouTube)
Khronos Sycl Language Framework for C++ Accelerators - Take Advantage of All the MIPS (ACCU Conference via YouTube)