Shrinking Deep Learning Models: Techniques and Energy Considerations
Offered By: media.ccc.de via YouTube
Course Description
Overview
Explore the world of deep learning model optimization in this 41-minute conference talk from the 37th Chaos Communication Congress (37C3). Gain insights into the energy consumption of modern machine learning models and discover innovative techniques for shrinking their size without compromising performance. Learn about the challenges posed by the end of Moore's law and how neural network parameter counts continue to grow exponentially. Examine various methods for managing model complexity, including low-bitwidth integer representation, pruning redundant connections, and knowledge distillation. Investigate the potential of running cutting-edge language models on consumer-grade GPUs and understand the implications for accessibility and experimentation. Acquire the knowledge needed to make informed decisions about the usage and regulation of deep learning models, while considering their environmental impact and resource requirements.
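To make two of the techniques named above concrete, here is a minimal sketch (not taken from the talk) of symmetric 8-bit weight quantization and magnitude-based pruning applied to a single weight matrix. The array sizes, sparsity level, and variable names are illustrative assumptions, and a real deployment would use a framework's quantization and pruning tooling rather than raw NumPy.

```python
# Minimal sketch: int8 weight quantization and magnitude pruning
# on one weight matrix (illustrative values only).
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(scale=0.05, size=(256, 256)).astype(np.float32)

# --- Low-bitwidth integer representation (symmetric int8 quantization) ---
scale = np.abs(weights).max() / 127.0          # map the largest magnitude to 127
w_int8 = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
w_dequant = w_int8.astype(np.float32) * scale  # reconstructed values used at inference

# --- Pruning redundant connections (magnitude pruning) ---
sparsity = 0.9                                  # assumed target: drop the 90% smallest weights
threshold = np.quantile(np.abs(weights), sparsity)
w_pruned = np.where(np.abs(weights) < threshold, 0.0, weights)

print(f"max abs quantization error: {np.abs(weights - w_dequant).max():.5f}")
print(f"fraction of weights kept after pruning: {(w_pruned != 0).mean():.2f}")
```

Storing the int8 tensor plus one float scale cuts the memory for this matrix roughly fourfold compared with float32, which is the same idea that lets quantized large language models fit on consumer-grade GPUs.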
Syllabus
37C3 - What is this? A machine learning model for ants?
Taught by
media.ccc.de
Related Courses
Sustainability of Food Systems: A Global Life Cycle Perspective - University of Minnesota via Coursera
La Responsabilidad Social Corporativa: Ruta a la Sostenibilidad - Miríadax
Wind, Waves and Tides: Alternative Energy Systems - University of Toronto via Coursera
Energy and the Earth - University of Wisconsin–Madison via Coursera
Shale Gas and Fracking: the Politics and Science - The University of Nottingham via FutureLearn