
SGD and Weight Decay Secretly Compress Your Neural Network

Offered By: MITCBMM via YouTube

Tags

Machine Learning Courses
Deep Learning Courses
Optimization Algorithms Courses
Stochastic Gradient Descent Courses
Model Compression Courses

Course Description

Overview

Explore how stochastic gradient descent (SGD) and weight decay implicitly compress neural networks in this 55-minute conference talk by Tomer Galanti of MIT. Delve into the mechanisms behind this hidden compression effect and gain a deeper understanding of how these widely used optimization methods shape the efficiency and performance of deep learning models.
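To see the effect concretely, here is a minimal PyTorch sketch (written for this listing, not code from the talk): it trains a small network with SGD plus weight decay, then reports the effective rank of each weight matrix as a rough proxy for the implicit compression the talk discusses. The architecture, data, learning rate, weight-decay strength, and rank tolerance are all arbitrary assumptions.

    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # Toy regression data (hypothetical, for illustration only).
    X = torch.randn(512, 64)
    y = torch.randn(512, 10)

    model = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 10))

    # weight_decay adds an L2 penalty on the weights inside each SGD update.
    opt = torch.optim.SGD(model.parameters(), lr=1e-2, weight_decay=1e-3)
    loss_fn = nn.MSELoss()

    for step in range(2000):
        opt.zero_grad()
        loss_fn(model(X), y).backward()
        opt.step()

    def effective_rank(W, tol=1e-3):
        # Count singular values above tol * (largest singular value):
        # a crude measure of how low-rank ("compressed") W has become.
        s = torch.linalg.svdvals(W)
        return int((s > tol * s[0]).sum())

    for name, p in model.named_parameters():
        if p.dim() == 2:
            print(name, tuple(p.shape), "effective rank:", effective_rank(p.detach()))

With weight decay set to zero, the reported ranks tend to stay closer to full; with it enabled, some directions shrink away, which is the sort of hidden compression the title refers to. The exact behavior depends on the data and hyperparameters.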

Syllabus

SGD and Weight Decay Secretly Compress Your Neural Network


Taught by

MITCBMM

Related Courses

Building Classification Models with scikit-learn (Pluralsight)
Practical Deep Learning for Coders - Full Course (freeCodeCamp)
Neural Networks Made Easy (Udemy)
Intro to Deep Learning (Kaggle)
Stochastic Gradient Descent (Great Learning via YouTube)