Power-of-Two Quantization for Low Bitwidth and Hardware Compliant Neural Networks
Offered By: tinyML via YouTube
Course Description
Overview
Explore power-of-two quantization techniques for low-bitwidth, hardware-compliant neural networks in this 24-minute conference talk from the tinyML Research Symposium 2022. Presented by Dominika Przewlocka-Rus, a researcher at Meta Reality Labs Research, the talk covers the key problems in quantization, surveys several quantization methods, and highlights their key differences. Learn about straight-through estimation, examine experimental results, and consider the hardware implications of power-of-two weights. The presentation concludes with a Q&A session and an acknowledgment of sponsors, offering valuable insights for anyone interested in optimizing neural networks for resource-constrained environments.
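As background for the techniques named in the talk, the sketch below illustrates the general idea of power-of-two quantization trained with a straight-through estimator: in the forward pass each weight is rounded to the nearest signed power of two, and in the backward pass the rounding is treated as the identity so gradients reach the full-precision weights. This is a minimal, generic sketch of the standard technique, not the speaker's exact method; the class name, exponent range, and clamping scheme are assumptions made for the example.

```python
import torch


class PowerOfTwoQuant(torch.autograd.Function):
    """Quantize a tensor to signed powers of two with a straight-through estimator."""

    @staticmethod
    def forward(ctx, w, min_exp, max_exp):
        # Assumed scheme: q = sign(w) * 2^round(log2(|w|)), with the exponent
        # clamped to [min_exp, max_exp]; exact zeros stay zero via sign(0) = 0.
        sign = torch.sign(w)
        mag = torch.abs(w).clamp(min=2.0 ** min_exp)
        exp = torch.round(torch.log2(mag)).clamp(min_exp, max_exp)
        return sign * (2.0 ** exp)

    @staticmethod
    def backward(ctx, grad_output):
        # Straight-through estimator: treat the rounding as the identity function,
        # so gradients flow to the full-precision weights unchanged.
        return grad_output, None, None


if __name__ == "__main__":
    w = torch.randn(4, 4, requires_grad=True)
    w_q = PowerOfTwoQuant.apply(w, -6, 0)   # exponents limited to an assumed small range
    (w_q ** 2).sum().backward()             # gradient reaches w through the STE
    print(w_q)
    print(w.grad)
```

Because every quantized weight is a power of two, a multiplication by it can in principle be replaced by a bit shift, which is the kind of benefit the talk's hardware discussion addresses.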
Syllabus
Introduction
Key Problems
Quantization Methods
Key Differences
Straight Through Estimation
Results
Hardware Considerations
Q&A
Sponsors
Taught by
tinyML
Related Courses
Neural Networks for Machine Learning (University of Toronto via Coursera)
Good Brain, Bad Brain: Basics (University of Birmingham via FutureLearn)
Statistical Learning with R (Stanford University via edX)
Machine Learning 1—Supervised Learning (Brown University via Udacity)
Fundamentals of Neuroscience, Part 2: Neurons and Networks (Harvard University via edX)