SWIS: Shared Weight Bit Sparsity for Efficient Neural Network Acceleration

Offered By: tinyML via YouTube

Tags

TinyML Courses
Quantization Courses
Energy Efficiency Courses
Scheduling Algorithms Courses

Course Description

Overview

Explore the SWIS (Shared Weight bIt Sparsity) framework for efficient neural network acceleration in this 20-minute conference talk from the tinyML Research Symposium 2021. Delve into a quantization technique that improves performance and storage compression through offline weight decomposition and scheduling algorithms. Learn how SWIS achieves significant accuracy improvements when quantizing MobileNet-v2, and discover its potential for up to 6X speedup and 1.8X energy improvement over state-of-the-art bit-serial architectures. The presentation covers the motivation for SWIS, bit sparsity, quantization error, the bit-serial multiplier, the SWIS architecture, a computation animation, scheduling, retraining, and a concluding Q&A session.
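To make the bit-sparsity idea concrete, here is a minimal Python sketch, not the paper's actual algorithm or code: each integer weight is greedily approximated by a small number of signed power-of-two terms, so a bit-serial datapath spends cycles only on nonzero bit slices. The function names (decompose_weight, bit_serial_dot) and parameters are invented for this illustration, and the sketch omits the "shared weight" part of SWIS, where selected bit positions are shared across groups of weights for storage compression.

import numpy as np  # used only to sanity-check against the exact dot product

def decompose_weight(w, num_terms=2, max_shift=7):
    """Greedily approximate integer weight w as a sum of +/- 2**shift terms."""
    terms, residual = [], int(w)
    for _ in range(num_terms):
        if residual == 0:
            break
        sign = 1 if residual > 0 else -1
        # Largest power of two not exceeding |residual| (MSB-first greedy pick).
        shift = min(max_shift, abs(residual).bit_length() - 1)
        terms.append((sign, shift))
        residual -= sign * (1 << shift)
    return terms  # e.g. 54 (0b110110) -> [(+1, 5), (+1, 4)], i.e. 32 + 16 = 48

def bit_serial_dot(weights, activations, num_terms=2):
    """Dot product computed term by term using only shifts and adds."""
    acc = 0
    for w, a in zip(weights, activations):
        for sign, shift in decompose_weight(w, num_terms):
            acc += sign * (a << shift)  # one bit-serial cycle per nonzero term
    return acc

weights = [54, -9, 3, 0]
activations = [2, 5, 7, 11]
exact = int(np.dot(weights, activations))
approx = bit_serial_dot(weights, activations)
print(f"exact={exact}  bit-sparse approx={approx}")

With num_terms=2 the approximation error comes only from weights whose binary representation has more than two nonzero bits (here, 54); allowing more terms per weight trades cycles for accuracy, which is the kind of accuracy/performance trade-off the talk's scheduling and retraining sections address.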

Syllabus

Introduction
Why we need SWIS
Bit Sparsity
Quantization Error
Bit-Serial Multiplier
SWIS Architecture
SWIS Computation Animation
SWIS Scheduling
SWIS Retraining
Questions
Sponsors


Taught by

tinyML

Related Courses

UT.1.01x: Energy 101
The University of Texas at Austin via edX
Data Analytics in Business
IEEE via edX
Power Up: English for the Energy Transition
Center for Technology Enhanced Learning via iversity
Introduction to Sustainable Construction
Universidad de Cantabria via Miríadax
Eficiencia energética en instalaciones de iluminación
Universitat Jaume I via Independent