Neural Nets for NLP - Latent Variable Models
Offered By: Graham Neubig via YouTube
Course Description
Overview
Syllabus
Intro
Discriminative vs. Generative Models
Quiz: What Types of Variables?
Why Latent Random Variables?
A Latent Variable Model
What is Our Loss Function? We would like to maximize the corpus log likelihood
Disconnect Between Samples and Objective
VAE Objective: We can create an optimizable objective matching our problem, starting with the KL divergence
Interpreting the VAE Objective
Problem: Straightforward Sampling is Inefficient
Problem! Sampling Breaks Backprop
Solution: Re-parameterization Trick
Generating from Language Models
Motivation for Latent Variables
Difficulties in Training
KL Divergence Annealing
Weaken the Decoder
Discrete Latent Variables?
Method 1: Enumeration
Method 2: Sampling
Reparameterization (Maddison et al. 2017; Jang et al. 2017)
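The VAE objective and re-parameterization trick covered in the syllabus can be sketched numerically. The NumPy implementation below is our illustration, not code from the lecture; it assumes a diagonal-Gaussian posterior q(z|x) = N(mu, diag(sigma^2)) and a standard-normal prior, for which the KL term has a closed form.

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var):
    """Re-parameterization trick: z = mu + sigma * eps, eps ~ N(0, I).

    Drawing eps outside the computation graph lets gradients flow
    through mu and log_var, which sampling z directly would block."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_to_standard_normal(mu, log_var):
    """Closed-form KL(q(z|x) || N(0, I)) for a diagonal Gaussian q."""
    return -0.5 * np.sum(1.0 + log_var - mu**2 - np.exp(log_var))
```

The negative VAE objective is then the reconstruction loss plus this KL term; minimizing it maximizes a lower bound (the ELBO) on the corpus log likelihood.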
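KL divergence annealing, listed above under "Difficulties in Training", is commonly implemented as a schedule on the weight of the KL term. The linear warm-up below is one standard illustrative choice (the exact schedule used in the lecture may differ).

```python
def kl_weight(step, warmup_steps=10000):
    """Linear KL annealing: the KL term's weight ramps from 0 to 1 over
    warmup_steps, so early training focuses on reconstruction and the
    decoder does not learn to ignore the latent code (posterior collapse)."""
    return min(1.0, step / warmup_steps)
```

The total loss at training step t is then reconstruction_loss + kl_weight(t) * kl_term.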
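For discrete latent variables, the reparameterization of Maddison et al. 2017 and Jang et al. 2017 (the Concrete / Gumbel-Softmax distribution) relaxes categorical sampling into a differentiable operation. The NumPy sketch below is our illustration of the idea, not the lecture's code.

```python
import numpy as np

rng = np.random.default_rng(0)

def gumbel_softmax(logits, tau=1.0):
    """Differentiable relaxation of categorical sampling: add Gumbel(0, 1)
    noise to the logits, then apply a temperature-scaled softmax.
    As tau -> 0, samples approach one-hot vectors."""
    gumbel = -np.log(-np.log(rng.uniform(size=logits.shape)))
    y = (logits + gumbel) / tau
    y = np.exp(y - y.max())  # numerically stable softmax
    return y / y.sum()
```

Because the output is a soft probability vector rather than a hard sample, gradients can pass through it to the logits during backpropagation.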
Taught by
Graham Neubig
Related Courses
Natural Language Processing - Columbia University via Coursera
Natural Language Processing - Stanford University via Coursera
Introduction to Natural Language Processing - University of Michigan via Coursera
moocTLH: Nuevos retos en las tecnologías del lenguaje humano (New Challenges in Human Language Technologies) - Universidad de Alicante via Miríadax
Natural Language Processing - Indian Institute of Technology, Kharagpur via Swayam