Integrating Constraints into Deep Learning Architectures with Structured Layers
Offered By: Simons Institute via YouTube
Course Description
Overview
Explore the integration of constraints into deep learning architectures through structured layers in this 50-minute lecture by Zico Kolter (CMU / Bosch Center for AI). Delve into topics such as deep equilibrium models, explicit and implicit layers, SAT optimization, and the implicit function theorem. Examine weight-tied input-injected networks, how expanding depth can capture different layer types, and the concept of equilibrium points in deep networks. Gain insights into sequence modeling and small-scale benchmarks as part of the "Emerging Challenges in Deep Learning" series presented at the Simons Institute.
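The deep equilibrium models mentioned above replace a stack of identical layers with a single fixed-point computation. A minimal sketch of the forward pass, assuming a weight-tied, input-injected layer f(z, x) = tanh(Wz + Ux + b) with hypothetical dimensions (the lecture's actual models and solvers differ):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions, chosen only for illustration.
d_x, d_z = 4, 8

# Weight-tied, input-injected layer: f(z, x) = tanh(W z + U x + b).
# Rescale W so the map is a contraction, guaranteeing a unique equilibrium.
W = rng.standard_normal((d_z, d_z))
W *= 0.9 / np.linalg.norm(W, 2)  # spectral norm < 1, tanh is 1-Lipschitz
U = rng.standard_normal((d_z, d_x))
b = np.zeros(d_z)

def f(z, x):
    return np.tanh(W @ z + U @ x + b)

def deq_forward(x, tol=1e-8, max_iter=500):
    """Find the equilibrium z* = f(z*, x) by fixed-point iteration,
    i.e. the limit of applying the same layer infinitely many times."""
    z = np.zeros(d_z)
    for _ in range(max_iter):
        z_next = f(z, x)
        if np.linalg.norm(z_next - z) < tol:
            return z_next
        z = z_next
    return z

x = rng.standard_normal(d_x)
z_star = deq_forward(x)
# At the equilibrium, applying the layer once more changes (almost) nothing.
print(np.linalg.norm(f(z_star, x) - z_star) < 1e-6)  # prints True
```

In practice DEQ implementations use faster root-finding (e.g. Broyden's method) rather than plain iteration, but the equilibrium condition being solved is the same.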
Syllabus
Introduction
Deep Equilibrium Models
Depth
Agenda
Explicit Layers
Implicit Layers
SAT Optimization
Implicit Function Theorem
Takeaway
Proposed Class of Structured Layers
Weight-Tied Input-Injected Networks
Expanding Depth to Capture Both Layers
Deep Networks
Summary
Stacking Layers
Do They Exist?
Equilibrium Point
Residual Point
Sequence Modeling
Small-Scale Benchmarks
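The "Implicit Function Theorem" item above is what makes implicit layers trainable: gradients can be obtained at the equilibrium without backpropagating through the solver's iterations. A sketch of the standard derivation (notation assumed, not taken from the lecture): given parameters $\theta$ and an equilibrium $z^*$ satisfying $z^* = f(z^*, x; \theta)$, differentiating both sides and solving gives

```latex
\frac{\partial z^*}{\partial \theta}
  = \left( I - \left.\frac{\partial f}{\partial z}\right|_{z^*} \right)^{-1}
    \left.\frac{\partial f}{\partial \theta}\right|_{z^*}
```

so the backward pass reduces to one linear solve at $z^*$, with memory cost independent of the effective depth.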
Taught by
Simons Institute
Related Courses
Applied Deep Learning: Build a Chatbot - Theory, Application (Udemy)
Can Wikipedia Help Offline Reinforcement Learning? - Paper Explained (Yannic Kilcher via YouTube)
Infinite Memory Transformer - Research Paper Explained (Yannic Kilcher via YouTube)
Recurrent Neural Networks and Transformers (Alexander Amini via YouTube)
MIT 6.S191 - Recurrent Neural Networks (Alexander Amini via YouTube)