A Technique for Extreme Compression of LSTM Models
Offered By: tinyML via YouTube
Course Description
Overview
Explore a technique for extreme compression of LSTM models using sparse structured additive matrices in this tinyML Talks webcast. Learn about structured matrices derived from Kronecker products (KP) and their effectiveness in compressing neural networks. Discover the concept of "doping" - adding an extremely sparse matrix on top of a structured matrix - and how it gives individual parameters additional, unconstrained degrees of freedom. Understand the challenges of training LSTMs with doped structured matrices, including the co-matrix adaptation problem and the co-matrix dropout regularization used to counter it. Examine empirical evidence that these concepts apply to multiple families of structured matrices. Delve into state-of-the-art accuracy results at large compression factors across natural language processing applications, and compare the doped KP compression technique to previous compression methods, pruning, and low-rank alternatives. Investigate the deployment of doped KP on commodity hardware and the resulting inference run-time speed-ups. The talk also covers training curves, output feature vectors, regularization techniques, controlling sparsity, and limitations of the approach.
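To make the core idea concrete, here is a minimal NumPy sketch of a doped Kronecker-product weight matrix: a dense matrix is replaced by the Kronecker product of two small factors plus an extremely sparse additive matrix. The matrix sizes, sparsity level, and variable names below are illustrative assumptions, not the settings used in the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Kronecker-product (KP) compression: a 256x256 weight matrix is stored
# as two 16x16 factors, shrinking the parameter count from 65,536 to
# 2 * 256 = 512 before doping.
B = rng.standard_normal((16, 16))
C = rng.standard_normal((16, 16))

# "Doping": add an extremely sparse matrix S on top of kron(B, C).
# The few nonzero entries of S (here ~0.5% of positions, an arbitrary
# illustrative density) get unconstrained degrees of freedom that the
# structured matrix alone cannot express.
n = 16 * 16
density = 0.005
mask = rng.random((n, n)) < density
S = np.where(mask, rng.standard_normal((n, n)), 0.0)

# Effective weight matrix of the doped KP layer.
W = np.kron(B, C) + S

# At inference time the dense W need not be materialized: with
# row-major vectorization, kron(B, C) @ x equals vec(B @ X @ C.T),
# so y = W @ x reduces to two small matrix products plus a sparse
# matrix-vector product.
x = rng.standard_normal(n)
X = x.reshape(16, 16)
y = (B @ X @ C.T).reshape(-1) + S @ x

assert np.allclose(y, W @ x)
```

The factorized evaluation in the last few lines is what makes the approach attractive on commodity hardware: both the KP term and the sparse term are far cheaper than a dense 256x256 matrix-vector multiply.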
Syllabus
Intro
What is a structured matrix?
Why aren't structured matrices more widespread?
A large NLP application
Vanilla Kronecker Product
Training Curves
Training Techniques
Training Results
Output Feature Vectors
Co-matrix Adaptation
Regularization
Co-matrix Regularization
Results
Training data
Controlling sparsity
Compression
Limitations
Conclusion
Rule of Thumb
More than 2 matrices
Why the perplexity increased
Post-training decomposition
Sponsors
Taught by
tinyML
Related Courses
Introduction to Artificial Intelligence - Stanford University via Udacity
Natural Language Processing - Columbia University via Coursera
Probabilistic Graphical Models 1: Representation - Stanford University via Coursera
Computer Vision: The Fundamentals - University of California, Berkeley via Coursera
Learning from Data (Introductory Machine Learning course) - California Institute of Technology via Independent