Bringing LLMs Everywhere Through Machine Learning Compilation
Offered By: The ASF via YouTube
Course Description
Overview
Explore the MLC-LLM project, an open-source initiative built on Apache TVM that enables running large language models (LLMs) with GPU acceleration on a wide range of devices, including PCs, mobile devices, and web browsers via WebGPU. Delve into the challenges of deploying computationally intensive LLMs beyond traditional server environments backed by cloud GPUs. Learn how machine learning compilation techniques are making generative AI and LLMs broadly accessible, potentially transforming numerous domains by bringing these powerful models to a wider range of devices and platforms.
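To make the idea concrete, the sketch below shows what running a compiled model locally with MLC-LLM can look like in Python. It is illustrative only: the mlc_chat package, the ChatModule class, and the model and library names are assumptions based on the MLC-LLM project's documentation and may differ between releases and target devices.

```python
# Minimal sketch of local LLM inference with MLC-LLM (assumed API surface).
from mlc_chat import ChatModule

# The quantized weights and the TVM-compiled model library are produced ahead
# of time by the MLC-LLM compilation flow for the chosen backend
# (CUDA, Metal, Vulkan, WebGPU, ...). Names below are hypothetical examples.
cm = ChatModule(
    model="Llama-2-7b-chat-hf-q4f16_1",                   # quantized weights (assumed name)
    model_lib_path="Llama-2-7b-chat-hf-q4f16_1-cuda.so",  # compiled library (assumed name)
)

# Generation runs entirely on the local GPU; no cloud endpoint is involved.
print(cm.generate(prompt="What is machine learning compilation?"))
```

The key point the talk makes is that the same compilation flow retargets one model definition to many backends, so the equivalent of this snippet can run on a phone or inside a browser via WebGPU rather than only on a server GPU.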
Syllabus
Bringing LLMs Everywhere via Machine Learning Compilation
Taught by
The ASF
Related Courses
Developing a Tabular Data Model (Microsoft via edX)
Data Science in Action - Building a Predictive Churn Model (SAP Learning)
Serverless Machine Learning with Tensorflow on Google Cloud Platform 日本語版 (Google Cloud via Coursera)
Intro to TensorFlow em Português Brasileiro (Google Cloud via Coursera)
Serverless Machine Learning con TensorFlow en GCP (Google Cloud via Coursera)