SGD and Weight Decay Secretly Compress Your Neural Network
Offered By: MITCBMM via YouTube
Course Description
Overview
Explore how Stochastic Gradient Descent (SGD) and weight decay inadvertently compress neural networks in this 55-minute conference talk by Tomer Galanti from MIT. Delve into the mechanisms behind this hidden compression effect, and gain a deeper understanding of how these widely used optimization methods shape the efficiency and performance of deep learning models.
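As a minimal sketch of the effect the talk describes (not the speaker's own code or experimental setup), the PyTorch snippet below trains a small MLP with SGD plus a nonzero weight_decay and tracks the effective rank of a hidden weight matrix, which tends to shrink under this training regime. The architecture, synthetic data, hyperparameters, and the effective_rank helper are all illustrative assumptions.

```python
# Illustrative sketch: SGD + weight decay tends to drive weight matrices
# toward low effective rank (i.e., an implicit compression of the network).
# All choices below (model, data, lr, weight_decay, rank tolerance) are
# assumptions for demonstration, not the speaker's setup.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy regression data (assumed for illustration).
X = torch.randn(512, 64)
y = torch.randn(512, 10)

model = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 10))
# weight_decay adds an L2 penalty on the weights at each SGD step.
opt = torch.optim.SGD(model.parameters(), lr=0.05, weight_decay=5e-3)
loss_fn = nn.MSELoss()

def effective_rank(W, tol=1e-2):
    """Count singular values above tol times the largest singular value."""
    s = torch.linalg.svdvals(W)  # returned in descending order
    return int((s > tol * s[0]).sum())

for step in range(2001):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()
    if step % 500 == 0:
        W = model[0].weight.detach()
        print(f"step {step:5d}  loss {loss.item():.4f}  "
              f"effective rank of first layer: {effective_rank(W)}")
```

Running the loop with weight_decay=0.0 for comparison is a quick way to see how much of any rank drop comes from the decay term rather than from SGD alone.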
Syllabus
SGD and Weight Decay Secretly Compress Your Neural Network
Taught by
MITCBMM
Related Courses
Audio Classification with TensorFlow
Coursera Project Network via Coursera
Logistic Regression with Python and Numpy
Coursera Project Network via Coursera
Deep Learning with PyTorch : Generative Adversarial Network
Coursera Project Network via Coursera
Improving Deep Neural Networks: Hyperparameter Tuning, Regularization and Optimization
DeepLearning.AI via Coursera
Improving Neural Networks: Hyperparameter Tuning and Optimization (in Arabic)
DeepLearning.AI via Coursera