
TinyML Talks Pakistan - SuperSlash - Unifying Design Space Exploration and Model Compression

Offered By: tinyML via YouTube

Tags

TinyML Courses
Deep Learning Courses
Model Compression Courses

Course Description

Overview

Explore a comprehensive methodology for unifying design space exploration and model compression in deep learning accelerators for TinyML applications. Delve into the challenges of deploying Deep Learning models on resource-constrained embedded devices and learn about SuperSlash, an innovative solution that combines Design Space Exploration (DSE) and Model Compression techniques. Discover how SuperSlash estimates off-chip memory access volume overhead, evaluates data reuse strategies, and implements layer fusion to optimize performance. Gain insights into the pruning process guided by a ranking function based on explored off-chip memory access costs. Examine the application of this technique to fit large DNN models on accelerators with limited computational resources, using examples such as MobileNet V1. Engage with a detailed analysis of the extended design space, multilayer fusion, and the impact of these strategies on TinyML implementations.
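The description above mentions pruning guided by a ranking function based on explored off-chip memory access costs. The following is a minimal Python sketch of that idea under simplifying assumptions: the cost model, layer sizes, function names, and on-chip budget are all illustrative, not the published SuperSlash implementation.

```python
# Hypothetical sketch: rank pruning candidates by estimated off-chip memory
# traffic, mimicking a ranking function driven by explored memory-access costs.
# All names, formulas, and numbers below are illustrative assumptions.

from dataclasses import dataclass
from typing import List

@dataclass
class Layer:
    name: str
    ifmap_bytes: int      # input feature map size
    ofmap_bytes: int      # output feature map size
    weight_bytes: int     # parameter size

def offchip_access(layer: Layer, on_chip_budget: int, fused_with_next: bool) -> int:
    """Toy cost model: data that does not fit on-chip must travel off-chip.
    Fusing a layer with its successor keeps the intermediate feature map
    on-chip, so the ofmap transfer is skipped (an assumption of this sketch)."""
    traffic = layer.ifmap_bytes + layer.weight_bytes
    if not fused_with_next:
        traffic += layer.ofmap_bytes
    # Working set beyond the on-chip budget spills and is re-fetched (simplified).
    spill = max(0, layer.ifmap_bytes + layer.weight_bytes - on_chip_budget)
    return traffic + spill

def rank_pruning_candidates(layers: List[Layer], on_chip_budget: int) -> List[Layer]:
    """Order layers so the most expensive (in off-chip traffic) come first;
    a pruning loop would then trim these until the model fits the device."""
    return sorted(
        layers,
        key=lambda l: offchip_access(l, on_chip_budget, fused_with_next=False),
        reverse=True,
    )

if __name__ == "__main__":
    # Hypothetical layer sizes loosely resembling early MobileNet V1 blocks.
    net = [
        Layer("conv1",    ifmap_bytes=150_000, ofmap_bytes=400_000, weight_bytes=3_000),
        Layer("conv2_dw", ifmap_bytes=400_000, ofmap_bytes=400_000, weight_bytes=300),
        Layer("conv2_pw", ifmap_bytes=400_000, ofmap_bytes=800_000, weight_bytes=2_000),
    ]
    for layer in rank_pruning_candidates(net, on_chip_budget=256_000):
        print(layer.name, offchip_access(layer, 256_000, fused_with_next=False))
```

In the talk itself, the cost estimates come from design space exploration over data-reuse and layer-fusion schedules rather than a closed-form approximation like the one above.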

Syllabus

Introduction
Strategic Partners
Welcome
Agenda
Deep Neural Networks
Image Classification Networks
Hardware accelerators
Motivation Analysis
Model Pruning
Layer Fusion
Methodology
Results
Extended Design Space
MobileNet V1
Conclusion
Questions
Multilayer Fusion
Crowd Sponsors


Taught by

tinyML

Related Courses

Learning TinyML
LinkedIn Learning
Deploying TinyML
Harvard University via edX
TinyML for Good: Ready to Take Off for a Big Impact
tinyML via YouTube
DavinSy Hands-on - Continuous Learning Beyond the Edge
Arm Software Developers via YouTube
Create and Connect Secure and Trustworthy IoT Devices
Microsoft via YouTube