CMU Neural Nets for NLP - Models with Latent Random Variables

Offered By: Graham Neubig via YouTube

Tags

Neural Networks Courses, Natural Language Processing (NLP) Courses, Language Models Courses, Generative Models Courses, KL Divergence Courses

Course Description

Overview

Explore models with latent random variables in neural networks for natural language processing through this comprehensive lecture. Delve into the differences between discriminative and generative models and the role latent random variables play in NLP tasks. Learn about the Variational Autoencoder (VAE) objective, how to interpret it, and how to work around the fact that sampling breaks backpropagation. Discover techniques for training latent-variable language models, including KL divergence annealing and weakening the decoder. Examine methods for handling discrete latent variables, such as enumeration and the Gumbel-Softmax technique. Investigate practical applications in controllable text generation and symbol sequence latent variables. Gain insights into the challenges and solutions in training these models, equipping yourself with advanced knowledge of neural network approaches for NLP.

Syllabus

Intro
Discriminative vs. Generative Models • Discriminative model: calculate the probability of the output given the input
Quiz: What Types of Variables?
Why Latent Random Variables?
An Example (Doersch 2016)
Problem: Straightforward Sampling is Inefficient
Solution: "Inference Model" • Predict which latent point produced the data point using inference
Disconnect Between Samples and Objective
VAE Objective • We can create an optimizable objective matching our problem, starting with KL divergence (the ELBO is written out after the syllabus)
Interpreting the VAE Objective
Problem! Sampling Breaks Backprop
Solution: Re-parameterization Trick (sketched in code after the syllabus)
Generating from Language Models
Motivation for Latent Variables • Allows for a consistent latent space of sentences?
Difficulties in Training
KL Divergence Annealing (a schedule sketch follows the syllabus)
Weaken the Decoder
Discrete Latent Variables?
Method 1: Enumeration
Reparameterization (Maddison et al. 2017, Jang et al. 2017)
Gumbel-Softmax • A way to soften the decision and allow for continuous gradients (sketched in code after the syllabus)
Variational Models of Language Processing (Miao et al. 2016)
Controllable Text Generation (Hu et al. 2017)
Symbol Sequence Latent Variables (Miao and Blunsom 2016)
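
For reference, the VAE objective mentioned in the syllabus is the evidence lower bound (ELBO). The form below uses standard notation (theta for the decoder/generative parameters, phi for the inference-model parameters) rather than anything copied from the slides:

    \log p_\theta(x) \;\ge\; \mathbb{E}_{q_\phi(z \mid x)}\big[\log p_\theta(x \mid z)\big] \;-\; \mathrm{KL}\big(q_\phi(z \mid x) \,\|\, p(z)\big)

The first term rewards reconstructing the data from the latent code; the KL term keeps the inference distribution close to the prior.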
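
A minimal NumPy sketch of the re-parameterization trick (names such as mu, log_var, and sample_latent are mine, not from the lecture): instead of sampling z from N(mu, sigma^2) directly, sample noise from N(0, I) and shift/scale it, so gradients can flow through mu and sigma.

    import numpy as np

    def sample_latent(mu, log_var, rng=np.random.default_rng(0)):
        # eps carries all the randomness; the transform below is deterministic
        eps = rng.standard_normal(mu.shape)
        sigma = np.exp(0.5 * log_var)   # sigma = exp(log_var / 2)
        return mu + sigma * eps         # differentiable w.r.t. mu and log_var

    z = sample_latent(np.zeros(16), np.zeros(16))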
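
KL divergence annealing can be illustrated with a simple linear warm-up schedule (my own sketch, not necessarily the exact recipe from the lecture): the weight on the KL term grows from 0 to 1 so the decoder cannot ignore the latent code early in training.

    def kl_weight(step, warmup=10000):
        # linear ramp from 0 to 1 over `warmup` steps, then constant at 1
        return min(1.0, step / warmup)

    def vae_loss(reconstruction_loss, kl_term, step):
        return reconstruction_loss + kl_weight(step) * kl_term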
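
A Gumbel-Softmax sketch in NumPy (function and variable names are mine): adding Gumbel noise to the logits and taking a temperature-controlled softmax gives a differentiable relaxation of sampling a discrete category, as in Jang et al. 2017 and Maddison et al. 2017.

    import numpy as np

    def gumbel_softmax(logits, tau=1.0, rng=np.random.default_rng(0)):
        # Gumbel noise: g = -log(-log(U)), U ~ Uniform(0, 1); epsilons for stability
        u = rng.uniform(size=logits.shape)
        gumbel = -np.log(-np.log(u + 1e-20) + 1e-20)
        y = (logits + gumbel) / tau          # lower tau -> closer to one-hot
        y = y - y.max()                      # numerically stable softmax
        expy = np.exp(y)
        return expy / expy.sum()

    probs = gumbel_softmax(np.array([2.0, 1.0, 0.1]), tau=0.5)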


Taught by

Graham Neubig

Related Courses

Advanced Deep Learning Methods for Healthcare
University of Illinois at Urbana-Champaign via Coursera
Deep Learning
Illinois Institute of Technology via Coursera
Understanding Artificial Intelligence
DataCamp
Google Gemini AI Course for Beginners
freeCodeCamp
Coding with Generative AI
Fractal Analytics via Coursera