Apache TVM: Optimizing ML Models for Edge Deployment - Deep Dive and Demo

Offered By: Databricks via YouTube

Tags

Deep Learning Courses, Performance Tuning Courses, Edge Computing Courses, Hardware Acceleration Courses

Course Description

Overview

Explore the world of machine learning optimization in this 49-minute deep dive on Apache TVM. Learn how this open-source deep learning compiler turns complex models into lightweight, hardware-specific code for edge devices, significantly improving inference speed and reducing deployment costs across a range of hardware platforms. Discover the inner workings of Apache TVM, its latest features, and upcoming developments. Follow along with a live demonstration of optimizing a custom machine learning model. Gain insights into AI compilation challenges, TVM internals, operator fusion, auto-scheduling, and real-world performance results. Compare public and private models, and understand why TVM is becoming an essential tool for ML practitioners aiming to improve model efficiency and deployment.
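
The compile-and-run flow the video walks through follows a common pattern in TVM's Python API. The sketch below is a minimal illustration only, assuming an ONNX model file named model.onnx with a single float32 input called "input" of shape (1, 3, 224, 224); the model and workflow in the live demo may differ.

import numpy as np
import onnx
import tvm
from tvm import relay
from tvm.contrib import graph_executor

# Load the model and convert it to TVM's Relay IR.
# "model.onnx" and the input name/shape are assumptions for illustration.
onnx_model = onnx.load("model.onnx")
shape_dict = {"input": (1, 3, 224, 224)}
mod, params = relay.frontend.from_onnx(onnx_model, shape_dict)

# Compile for a target; "llvm" means the local CPU. A higher opt_level enables
# more aggressive graph-level optimizations such as operator fusion.
target = tvm.target.Target("llvm")
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target=target, params=params)

# Run inference with the graph executor.
dev = tvm.cpu(0)
module = graph_executor.GraphModule(lib["default"](dev))
module.set_input("input", np.random.rand(1, 3, 224, 224).astype("float32"))
module.run()
output = module.get_output(0).numpy()

Auto-scheduling, also covered in the video, is a separate tuning step: candidate tasks are extracted from the module, tuned against the target hardware, and the best schedules found are applied when the module is rebuilt.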

Syllabus

Introduction
AI Compilation Wars
Machine Learning Compilers
Who is using TVM
The landscape of deep learning
High-level optimizations
Operators and nodes
TVM internals
How to use TVM
Fusion
Auto Scheduler
Auto Scheduler workflow
Task Scheduler workflow
Real-world results
The best of both worlds
Auto scheduling
Why use TVM
Live Demo
Uploading a new model
Performance results
Cross-product results
Comparing public vs private models
Outro


Taught by

Databricks

Related Courses

Fog Networks and the Internet of Things
Princeton University via Coursera
AWS IoT: Developing and Deploying an Internet of Things
Amazon Web Services via edX
Business Considerations for 5G with Edge, IoT, and AI
Linux Foundation via edX
5G Strategy for Business Leaders
Linux Foundation via edX
Intel® Edge AI Fundamentals with OpenVINO™
Intel via Udacity