YoVDO

Implicit MLE: Backpropagating Through Discrete Exponential Family Distributions

Offered By: Yannic Kilcher via YouTube

Tags

Deep Learning, Neural Networks, Backpropagation, Deep Networks

Course Description

Overview

Explore a comprehensive video explanation of the paper "Implicit MLE: Backpropagating Through Discrete Exponential Family Distributions". Delve into the challenges of incorporating discrete probability distributions and combinatorial optimization problems into neural networks. Learn about the Implicit Maximum Likelihood Estimation (I-MLE) framework for end-to-end learning of models that combine discrete exponential family distributions with differentiable neural components. Discover how I-MLE enables backpropagation through discrete algorithms, allowing combinatorial optimizers to be part of a network's forward propagation. Follow along as the video breaks down key concepts, including the straight-through estimator, encoding discrete problems as inner products, and approximating marginals via perturb-and-MAP. Gain insights into the paper's contributions, methodology, and practical applications through detailed explanations and visual aids.
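The straight-through estimator recapped in the video can be illustrated with a minimal NumPy sketch: sample a discrete one-hot vector in the forward pass, then treat the sampling step as the identity in the backward pass so the upstream gradient passes straight through to the logits. The function names here are hypothetical, not the paper's reference implementation.

```python
import numpy as np

def straight_through_forward(logits, rng):
    """Forward pass: sample a one-hot vector from softmax(logits)."""
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    idx = rng.choice(len(logits), p=probs)
    one_hot = np.eye(len(logits))[idx]
    return one_hot, probs

def straight_through_backward(upstream_grad):
    """Backward pass: the discrete sampling step is treated as the
    identity, so the gradient w.r.t. the logits is simply the
    upstream gradient, unchanged."""
    return upstream_grad

rng = np.random.default_rng(0)
z, probs = straight_through_forward(np.array([1.0, 2.0, 0.5]), rng)
grad_logits = straight_through_backward(np.array([0.1, -0.2, 0.3]))
```

This biased but simple gradient substitute is the baseline the I-MLE framework improves on.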

Syllabus

- Intro & Overview
- Sponsor: Weights & Biases
- Problem Setup & Contributions
- Recap: Straight-Through Estimator
- Encoding the discrete problem as an inner product
- From algorithm to distribution
- Substituting the gradient
- Defining a target distribution
- Approximating marginals via perturb-and-MAP
- Entire algorithm recap
- GitHub Page & Example


Taught by

Yannic Kilcher

Related Courses

Neural Networks for Machine Learning
University of Toronto via Coursera
機器學習技法 (Machine Learning Techniques)
National Taiwan University via Coursera
Machine Learning Capstone: An Intelligent Application with Deep Learning
University of Washington via Coursera
Прикладные задачи анализа данных (Applied Problems in Data Analysis)
Moscow Institute of Physics and Technology via Coursera
Leading Ambitious Teaching and Learning
Microsoft via edX