Bringing LLMs Everywhere Through Machine Learning Compilation
Offered By: The ASF via YouTube
Course Description
Overview
Explore the MLC-LLM project, an open-source initiative built on Apache TVM that runs large language models (LLMs) with GPU acceleration on a wide range of devices, including PCs, mobile devices, and web browsers via WebGPU. Delve into the challenges of deploying computationally intensive LLMs beyond traditional server environments backed by cloud GPUs. Learn how machine learning compilation techniques are broadening access to generative AI and LLMs, bringing these powerful models to a far wider range of devices and platforms.
Syllabus
Bringing LLMs Everywhere via Machine Learning Compilation
Taught by
The ASF
Related Courses
Fundamentals of Accelerated Computing with CUDA C/C++ (Nvidia via Independent)
Using GPUs to Scale and Speed-up Deep Learning (IBM via edX)
Deep Learning (IBM via edX)
Deep Learning with IBM (IBM via edX)
Accelerating Deep Learning with GPUs (IBM via Cognitive Class)