Bringing LLMs Everywhere Through Machine Learning Compilation
Offered By: The ASF via YouTube
Course Description
Overview
Explore the MLC-LLM project, an open-source initiative built on Apache TVM that runs large language models (LLMs) with GPU acceleration on a wide range of devices, including PCs, mobile devices, and browsers via WebGPU. Delve into the challenges of deploying computationally intensive LLMs beyond traditional server environments with cloud GPUs, and learn how machine learning compilation techniques are making generative AI and LLMs accessible across many more devices and platforms.
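As a rough sketch of what local inference with MLC-LLM looks like, the snippet below follows the Python MLCEngine API shape from the project's quickstart documentation; the exact class names, the model identifier (HF://mlc-ai/Llama-3-8B-Instruct-q4f16_1-MLC), and argument layout are illustrative and may differ between releases.

    # Minimal sketch of MLC-LLM's OpenAI-style Python API (based on the
    # project's quickstart; details may vary by version).
    from mlc_llm import MLCEngine

    # Example model identifier; a prebuilt quantized model hosted by mlc-ai.
    model = "HF://mlc-ai/Llama-3-8B-Instruct-q4f16_1-MLC"
    engine = MLCEngine(model)

    # Stream a chat completion, printing tokens as they arrive.
    for response in engine.chat.completions.create(
        messages=[{"role": "user", "content": "What is machine learning compilation?"}],
        model=model,
        stream=True,
    ):
        for choice in response.choices:
            print(choice.delta.content, end="", flush=True)
    print()

    engine.terminate()

The same compiled model artifacts are intended to run across backends (CUDA, Metal, Vulkan, WebGPU), which is the portability point the talk emphasizes.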
Syllabus
Bringing LLMs Everywhere via Machine Learning Compilation
Taught by
The ASF
Related Courses
Accelerating Deep Learning with GPUs - IBM via Cognitive Class
DevOps, DataOps, MLOps - Pragmatic AI Labs via edX
Rust for Large Language Model Operations (LLMOps) - Pragmatic AI Labs via edX
Advanced Generative Adversarial Networks (GANs) - Packt via Coursera
Deep Learning with IBM - IBM via edX