The Emergence of Essential Sparsity in Large Pre-trained Models

Offered By: Unify via YouTube

Tags

Transformers Courses
Artificial Intelligence Courses
Machine Learning Courses
Pre-trained Models Courses
Model Compression Courses

Course Description

Overview

Explore the concept of essential sparsity in large pre-trained models in this 1-hour 10-minute talk by Professor Atlas Wang of the University of Texas at Austin. Delve into efficient methods for handling the large and complex pre-trained transformer models used in contemporary machine learning. Learn how essential sparsity is defined: a sharp threshold beyond which removing small-magnitude weights degrades performance far more severely than it does at lower sparsity levels. Gain access to the project code on GitHub and learn about the research paper "The Emergence of Essential Sparsity in Large Pre-trained Models: The Weights that Matter." Explore additional resources such as The Deep Dive newsletter for the latest AI research and industry trends, and Unify's blog for insights into the AI deployment stack. Connect with Unify through their website, GitHub, Discord, and Twitter to stay updated on AI advancements, transformers, language models, and sparsification techniques.
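To make the idea concrete, below is a minimal sketch of one-shot global magnitude pruning, the kind of weight removal the talk studies. It is written in Python with PyTorch; the function name magnitude_prune, the restriction to Linear layers, and the toy model are illustrative assumptions for this sketch, not the authors' code.

import torch
import torch.nn as nn

def magnitude_prune(model: nn.Module, sparsity: float) -> None:
    """One-shot global magnitude pruning (illustrative sketch).

    Zeroes out the fraction `sparsity` of weights with the smallest
    absolute value across all Linear layers of `model`.
    """
    weights = [m.weight for m in model.modules() if isinstance(m, nn.Linear)]
    if not weights:
        return
    # Pool all weight magnitudes to pick a single global threshold.
    all_abs = torch.cat([w.detach().abs().flatten() for w in weights])
    k = int(sparsity * all_abs.numel())
    if k == 0:
        return
    threshold = torch.kthvalue(all_abs, k).values
    with torch.no_grad():
        for w in weights:
            # Keep only weights strictly above the magnitude threshold.
            w.mul_((w.abs() > threshold).to(w.dtype))

# Example: prune a toy model; sweeping sparsity upward and re-evaluating
# after each step is one way to locate the sharp performance drop that
# the paper calls essential sparsity.
model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10))
magnitude_prune(model, sparsity=0.5)  # remove the smallest 50% of weights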

Syllabus

The Emergence of Essential Sparsity in Large Pre-trained Models


Taught by

Unify

Related Courses

Perform Real-Time Object Detection with YOLOv3
Coursera Project Network via Coursera
Intel® Edge AI Fundamentals with OpenVINO™
Intel via Udacity
Building Deep Learning Applications with Keras 2.0
LinkedIn Learning
Expediting Deep Learning with Transfer Learning: PyTorch Playbook
Pluralsight
2024 Introduction to Spacy for Natural Language Processing
Udemy