CMU Advanced NLP 2022 - Latent Variable Models
Offered By: Graham Neubig via YouTube
Course Description
Overview
Explore latent variable models in advanced natural language processing through this comprehensive lecture. Delve into generative vs. discriminative models, deterministic vs. random variables, and variational autoencoders. Learn about handling discrete latent variables, examine examples of variational autoencoders in NLP, and understand the difference between learning features and learning structure. Cover topics such as loss functions, variational inference, regularized autoencoders, sampling techniques, and the motivation behind using latent variables. Discover training methods for VAEs, including aggressive inference network learning, and explore the reparameterization trick and Gumbel-Softmax function. Gain insights into practical applications of these concepts in NLP tasks.
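Two of the techniques the lecture covers, the reparameterization trick and Gumbel-Softmax sampling, can be sketched in a few lines of NumPy. This is an illustrative sketch only (function names and shapes are our own choices, not from the lecture): the reparameterization trick rewrites a Gaussian sample as a deterministic function of the parameters plus independent noise, and Gumbel-Softmax gives a differentiable relaxation of sampling from a categorical distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var):
    """Sample z ~ N(mu, sigma^2) via z = mu + sigma * eps, eps ~ N(0, 1).

    Because the randomness (eps) is independent of mu and log_var,
    gradients can flow through the parameters during VAE training.
    """
    eps = rng.standard_normal(np.shape(mu))
    return mu + np.exp(0.5 * log_var) * eps

def gumbel_softmax(logits, temperature=0.5):
    """Differentiable relaxation of sampling from a categorical distribution.

    Adds Gumbel(0, 1) noise to the logits, then applies a
    temperature-scaled softmax; as temperature -> 0 the output
    approaches a one-hot sample.
    """
    gumbel = -np.log(-np.log(rng.uniform(size=np.shape(logits))))
    y = (np.asarray(logits) + gumbel) / temperature
    e = np.exp(y - y.max())  # numerically stable softmax
    return e / e.sum()

z = reparameterize(np.zeros(4), np.zeros(4))       # 4 draws from N(0, 1)
probs = gumbel_softmax(np.array([1.0, 2.0, 0.5]))  # relaxed one-hot over 3 classes
```

In a real VAE the encoder network would produce `mu` and `log_var` per input, and a framework with autodiff (e.g. PyTorch) would backpropagate through these sampling steps.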
Syllabus
Introduction
Types of Variables
Latent Variable Models
Loss Function
Variational inference
Regularized Autoencoder
Sampling
Ancestral Sampling
Conditioned Language Models
Motivation for latent variables
Training VAEs
Aggressive inference network learning
Latent variables
Discrete latent variables
Reparameterization
Random Sampling
Reparameterization Trick
Gumbel Softmax
Gumbel Function
Application Examples
Taught by
Graham Neubig
Related Courses
Natural Language Processing - Columbia University via Coursera
Natural Language Processing - Stanford University via Coursera
Introduction to Natural Language Processing - University of Michigan via Coursera
moocTLH: Nuevos retos en las tecnologías del lenguaje humano - Universidad de Alicante via Miríadax
Natural Language Processing - Indian Institute of Technology, Kharagpur via Swayam