Neural Nets for NLP - Latent Variable Models
Offered By: Graham Neubig via YouTube
Course Description
Overview
Syllabus
Intro
Discriminative vs. Generative Models
Quiz: What Types of Variables?
Why Latent Random Variables?
A Latent Variable Model
What is Our Loss Function? We would like to maximize the corpus log likelihood
Disconnect Between Samples and Objective
VAE Objective: We can create an optimizable objective matching our problem, starting with KL divergence
Interpreting the VAE Objective
Problem: Straightforward Sampling is Inefficient
Problem! Sampling Breaks Backprop
Solution: Re-parameterization Trick
Generating from Language Models
Motivation for Latent Variables
Difficulties in Training
KL Divergence Annealing
Weaken the Decoder
Discrete Latent Variables?
Enumeration
Method 2: Sampling
Reparameterization (Maddison et al. 2017, Jang et al. 2017)
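The VAE objective and re-parameterization trick covered above can be sketched in NumPy. This is a minimal illustration, not the lecture's code: the function names are made up, and gradients are only described in comments, since plain NumPy has no autodiff.

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var, rng):
    # z ~ N(mu, sigma^2) rewritten as a deterministic function of (mu, log_var)
    # plus parameter-free noise eps ~ N(0, 1). Because the randomness is moved
    # into eps, backprop could reach mu and log_var; sampling z directly would
    # break the gradient path.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def gaussian_kl(mu, log_var):
    # Closed-form KL(q(z|x) || N(0, I)), the regularization term in the
    # VAE objective (the other term is the reconstruction log likelihood).
    return -0.5 * np.sum(1.0 + log_var - mu ** 2 - np.exp(log_var))

mu = np.zeros(4)       # toy encoder mean
log_var = np.zeros(4)  # toy encoder log-variance; sigma = 1
z = reparameterize(mu, log_var, rng)
kl = gaussian_kl(mu, log_var)  # 0 when q already equals the prior
```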
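KL divergence annealing, one of the training fixes listed above, is typically a schedule that scales the KL term from 0 up to 1 over early training. A minimal sketch (the schedule shape and `warmup_steps` value are illustrative assumptions):

```python
def kl_weight(step, warmup_steps=10000):
    # Linear annealing: the weight on the KL term rises from 0 to 1 over
    # warmup_steps, letting the decoder first learn to reconstruct before
    # the posterior is pulled toward the prior (mitigates posterior collapse).
    return min(1.0, step / warmup_steps)

# Per-step loss under this schedule (recon_loss and kl are placeholders):
# loss = recon_loss + kl_weight(step) * kl
```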
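The Gumbel-softmax relaxation of Maddison et al. (2017) and Jang et al. (2017), cited above for discrete latent variables, replaces a non-differentiable categorical sample with a temperature-controlled softmax over noised logits. A minimal NumPy sketch (the temperature `tau` is a hyperparameter; as it approaches 0 the samples approach one-hot):

```python
import numpy as np

def gumbel_softmax(logits, tau, rng):
    # Add Gumbel(0, 1) noise to the logits, then apply a softmax with
    # temperature tau. The result is a continuous relaxation of a one-hot
    # categorical sample, so gradients could flow back into the logits.
    u = rng.uniform(size=logits.shape)
    g = -np.log(-np.log(u))            # Gumbel(0, 1) noise
    y = (logits + g) / tau
    y = y - y.max()                    # subtract max for numerical stability
    e = np.exp(y)
    return e / e.sum()

rng = np.random.default_rng(0)
logits = np.log(np.array([0.1, 0.2, 0.7]))
soft = gumbel_softmax(logits, tau=1.0, rng=rng)    # smooth sample
hard = gumbel_softmax(logits, tau=0.01, rng=rng)   # near one-hot sample
```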
Taught by
Graham Neubig
Related Courses
Topographic VAEs Learn Equivariant Capsules - Machine Learning Research Paper Explained
Yannic Kilcher via YouTube
Deep Generative Modeling
Alexander Amini via YouTube
Learning What We Know and Knowing What We Learn - Gaussian Process Priors for Neural Data Analysis
MITCBMM via YouTube