YoVDO

Implicit MLE: Backpropagating Through Discrete Exponential Family Distributions

Offered By: Yannic Kilcher via YouTube

Tags

Deep Learning Courses, Neural Networks Courses, Backpropagation Courses, Deep Networks Courses

Course Description

Overview

Explore a comprehensive video explanation of the paper "Implicit MLE: Backpropagating Through Discrete Exponential Family Distributions". Delve into the challenges of integrating discrete probability distributions and combinatorial optimization problems into neural networks. Learn about the Implicit Maximum Likelihood Estimation (I-MLE) framework for end-to-end learning of models that combine discrete exponential family distributions with differentiable neural components. Discover how I-MLE enables backpropagation through discrete algorithms, allowing combinatorial optimizers to be part of a network's forward propagation. Follow along as the video breaks down key concepts, including the straight-through estimator, encoding discrete problems as inner products, and approximating marginals via perturb-and-MAP. Gain insights into the paper's contributions, methodology, and practical applications through detailed explanations and visual aids.
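The two ideas named above, perturb-and-MAP sampling and gradient substitution through a target distribution, can be illustrated with a minimal numpy sketch. This is an assumption-laden toy, not the paper's implementation: `map_solver` stands in for an arbitrary combinatorial MAP oracle (here a plain argmax), and the target-distribution shift uses a hypothetical step size `lam`.

```python
import numpy as np

def map_solver(theta):
    """Hypothetical MAP oracle: returns the one-hot state z maximizing
    the inner product <theta, z>. A real combinatorial solver (e.g. for
    top-k or shortest paths) would go here; argmax is a stand-in."""
    z = np.zeros_like(theta)
    z[np.argmax(theta)] = 1.0
    return z

def perturb_and_map(theta, rng, tau=1.0):
    """Approximate sampling from the exponential family p(z; theta) by
    perturbing theta with Gumbel noise and solving the MAP problem."""
    gumbel = -np.log(-np.log(rng.uniform(size=theta.shape)))
    return map_solver(theta + tau * gumbel)

def imle_grad(theta, z_sample, downstream_grad, lam=10.0):
    """Substituted gradient w.r.t. theta: the difference between the
    sampled state and the MAP state of a target distribution whose
    parameters are shifted against the downstream loss gradient."""
    theta_target = theta - lam * downstream_grad  # move mass toward lower loss
    z_target = map_solver(theta_target)
    return z_sample - z_target

rng = np.random.default_rng(0)
theta = np.array([0.2, 1.5, -0.3, 0.7])   # neural network outputs (logits)
z = perturb_and_map(theta, rng)            # discrete forward pass
dz = np.array([0.0, 1.0, 0.0, -1.0])       # pretend dLoss/dz from downstream
g = imle_grad(theta, z, dz)                # gradient passed back to theta
```

Because both `z` and the target MAP state are one-hot, the substituted gradient is sparse and sums to zero; in training, `g` would be backpropagated into the network that produced `theta`.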

Syllabus

- Intro & Overview
- Sponsor: Weights & Biases
- Problem Setup & Contributions
- Recap: Straight-Through Estimator
- Encoding the discrete problem as an inner product
- From algorithm to distribution
- Substituting the gradient
- Defining a target distribution
- Approximating marginals via perturb-and-MAP
- Entire algorithm recap
- GitHub Page & Example


Taught by

Yannic Kilcher

Related Courses

- Neural Networks for Machine Learning (University of Toronto via Coursera)
- Good Brain, Bad Brain: Basics (University of Birmingham via FutureLearn)
- Statistical Learning with R (Stanford University via edX)
- Machine Learning 1—Supervised Learning (Brown University via Udacity)
- Fundamentals of Neuroscience, Part 2: Neurons and Networks (Harvard University via edX)