YoVDO

Implicit MLE: Backpropagating Through Discrete Exponential Family Distributions

Offered By: Yannic Kilcher via YouTube

Tags

Deep Learning Courses, Neural Networks Courses, Backpropagation Courses, Deep Networks Courses

Course Description

Overview

Explore a comprehensive video explanation of the paper "Implicit MLE: Backpropagating Through Discrete Exponential Family Distributions". Delve into the challenges of incorporating discrete probability distributions and combinatorial optimization problems with neural networks. Learn about the Implicit Maximum Likelihood Estimation (I-MLE) framework for end-to-end learning of models combining discrete exponential family distributions and differentiable neural components. Discover how I-MLE enables backpropagation through discrete algorithms, allowing combinatorial optimizers to be part of a network's forward propagation. Follow along as the video breaks down key concepts, including the straight-through estimator, encoding discrete problems as inner products, and approximating marginals via perturb-and-MAP. Gain insights into the paper's contributions, methodology, and practical applications through detailed explanations and visual aids.
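The straight-through estimator recapped in the video can be illustrated with a minimal numpy sketch (not taken from the paper or video): the forward pass applies a hard threshold, while the backward pass pretends the threshold was the identity and passes the upstream gradient through unchanged. Real implementations hook this into an autograd framework; the function names here are illustrative only.

```python
import numpy as np

def ste_forward(theta):
    # Forward pass: hard discretization z = 1[theta > 0].
    # This step has zero gradient almost everywhere.
    return (theta > 0.0).astype(np.float64)

def ste_backward(grad_output):
    # Backward pass (straight-through): pass the upstream gradient
    # unchanged, as if the thresholding were the identity map.
    return grad_output

theta = np.array([-0.5, 0.2, 1.3])
z = ste_forward(theta)         # discrete output used downstream
g = ste_backward(np.ones(3))   # surrogate gradient w.r.t. theta
```

The bias of this identity substitution is one motivation for the I-MLE framework's more principled gradient construction.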

Syllabus

- Intro & Overview
- Sponsor: Weights & Biases
- Problem Setup & Contributions
- Recap: Straight-Through Estimator
- Encoding the discrete problem as an inner product
- From algorithm to distribution
- Substituting the gradient
- Defining a target distribution
- Approximating marginals via perturb-and-MAP
- Entire algorithm recap
- GitHub Page & Example
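The "Approximating marginals via perturb-and-MAP" step in the syllabus can be sketched as follows. This is a minimal illustration, not code from the paper's repository: for a categorical (one-hot) exponential family, perturbing the logits with Gumbel noise and solving the MAP problem (here just an argmax) yields samples whose empirical frequencies approximate the marginals; in the paper the same idea is applied with a combinatorial solver in place of the argmax.

```python
import numpy as np

rng = np.random.default_rng(0)

def perturb_and_map(theta, num_samples=5000):
    """Approximate the marginals of a categorical distribution by
    perturbing the logits theta with Gumbel noise and taking the MAP
    (argmax) of each perturbed objective, then averaging the one-hot
    solutions."""
    d = len(theta)
    counts = np.zeros(d)
    for _ in range(num_samples):
        eps = rng.gumbel(size=d)            # Gumbel perturbation
        counts[np.argmax(theta + eps)] += 1  # MAP of perturbed problem
    return counts / num_samples

theta = np.array([1.0, 2.0, 0.5])
marginals = perturb_and_map(theta)  # close to softmax(theta)
```

For the one-hot case with i.i.d. Gumbel noise this recovers exact Gumbel-max sampling, so the estimate converges to the softmax of the logits; with structured solvers the MAP call is the same, but the approximation is no longer exact.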


Taught by

Yannic Kilcher

Related Courses

Deep Learning Fundamentals with Keras
IBM via edX
Deep Learning Essentials
Université de Montréal via edX
Deep Learning with TensorFlow 2.0
Udemy
Data Science: Deep Learning and Neural Networks in Python
Udemy
Neural Networks and Deep Learning
DeepLearning.AI via Coursera