Saving 95% of Your Edge Power with Sparsity to Enable TinyML

Offered By: tinyML via YouTube

Tags

Machine Learning Courses, Deep Learning Courses, Neural Networks Courses, Edge Computing Courses

Course Description

Overview

Explore techniques for reducing power consumption in edge machine learning applications through a tinyML Talks webcast featuring Jon Tapson from GrAI Matter Labs. Learn about the unique characteristics of edge ML tasks, which are typically continuous, real-time processes operating on streaming data. Discover how exploiting multiple types of sparsity can significantly reduce the computation required, leading to lower latency and power consumption for tinyML tasks. Gain insights into time, space, connectivity, and activation sparsity in edge processes and their practical impact on computation. Get introduced to the GrAI Core architecture and its event-based paradigm for maximizing the exploitation of sparsity in edge inference workloads. Understand how these techniques can save up to 95% of edge power, enabling more efficient tinyML applications.
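To make the computational impact concrete, the sketch below is a hypothetical NumPy illustration (not code from the talk or from GrAI Matter Labs) of temporal, or delta, sparsity in a single dense layer: only inputs that changed since the previous frame trigger multiply-accumulate work, and the operation count is compared against recomputing the layer densely on every frame. The layer sizes, change threshold, and synthetic input stream are assumptions chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

IN, OUT = 256, 128                      # illustrative layer size
weights = rng.standard_normal((IN, OUT)) * 0.1


def dense_event_driven(frame, prev_frame, prev_preact, threshold=1e-3):
    """Update a dense layer's pre-activations only for inputs that changed.

    Temporal (delta) sparsity: if few inputs change between consecutive
    frames, only those rows of the weight matrix are touched.  Returns the
    updated pre-activations and the number of MACs actually performed.
    """
    delta = frame - prev_frame
    changed = np.abs(delta) > threshold
    preact = prev_preact + delta[changed] @ weights[changed, :]
    macs = int(changed.sum()) * OUT
    return preact, macs


# Simulate a slowly varying input stream (e.g. consecutive sensor frames).
frame = rng.standard_normal(IN)
prev_frame = np.zeros(IN)
preact = np.zeros(OUT)
sparse_macs, dense_macs = 0, 0

for _ in range(100):
    # Only a handful of inputs change per frame.
    idx = rng.integers(0, IN, size=8)
    frame = frame.copy()
    frame[idx] += 0.05 * rng.standard_normal(8)

    preact, macs = dense_event_driven(frame, prev_frame, preact)
    out = np.maximum(preact, 0.0)       # ReLU output: zeros give downstream activation sparsity
    prev_frame = frame.copy()

    sparse_macs += macs
    dense_macs += IN * OUT              # cost of recomputing the full layer

print(f"event-driven MACs: {sparse_macs}, dense MACs: {dense_macs}, "
      f"saved {100 * (1 - sparse_macs / dense_macs):.1f}%")
```

With slowly varying inputs, the delta path performs only a small fraction of the dense work; activation sparsity (the zeros produced by ReLU) would additionally let downstream layers skip computation. This is the kind of saving the talk describes exploiting in hardware through an event-based architecture.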

Syllabus

Intro
About Jon Tapson
Edge workloads are different
Edge data is massive
Speech waveforms
What is sparsity?
Deep neural networks
Fanout
Basic CNN
Typical gains
Neural Network Accelerator
How it works
Events
Use cases
Software stack
Runtime support
Sparsity performance
Summary
Questions
Conclusion
Edge Impulse
Sponsor
Next talk
Thanks


Taught by

tinyML

Related Courses

Neural Networks for Machine Learning
University of Toronto via Coursera
Good Brain, Bad Brain: Basics
University of Birmingham via FutureLearn
Statistical Learning with R
Stanford University via edX
Machine Learning 1—Supervised Learning
Brown University via Udacity
Fundamentals of Neuroscience, Part 2: Neurons and Networks
Harvard University via edX