Neural Nets for NLP - Models with Latent Random Variables

Offered By: Graham Neubig via YouTube

Tags

Neural Networks Courses, Natural Language Processing (NLP) Courses, Variational Autoencoders Courses, Generative Models Courses

Course Description

Overview

Explore models with latent random variables in this comprehensive lecture from CMU's Neural Networks for NLP course. Delve into the distinctions between generative and discriminative models, as well as deterministic and random variables. Examine Variational Autoencoders (VAEs) in depth, including their structure, learning process, and challenges in training. Learn techniques for handling discrete latent variables and discover practical applications of VAEs in natural language processing. Gain insights into deep structured latent variable models and their importance in specifying interpretable structures like POS tags and dependency parse trees. Understand the probabilistic perspective on VAEs, explore variational inference methods, and study solutions to common training difficulties such as the re-parameterization trick and weakening the decoder.
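
As a rough illustration of the re-parameterization trick mentioned above, the sketch below shows how a Gaussian latent variable can be sampled so that gradients still reach the encoder outputs. It is a minimal example assuming PyTorch and a Gaussian latent space, not code from the course; the variable names and sizes are illustrative.

    import torch

    def reparameterize(mu, log_var):
        # Sample z = mu + sigma * eps with eps ~ N(0, I); the randomness lives
        # in eps, so backprop can reach mu and log_var.
        std = torch.exp(0.5 * log_var)
        eps = torch.randn_like(std)
        return mu + std * eps

    # Illustrative encoder outputs: batch of 4 items, latent dimension 16.
    mu = torch.zeros(4, 16, requires_grad=True)
    log_var = torch.zeros(4, 16, requires_grad=True)
    z = reparameterize(mu, log_var)

    # KL divergence between N(mu, sigma^2) and the standard normal prior,
    # the regularization term in the VAE objective.
    kl = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp())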

Syllabus

Intro
Discriminative vs. Generative Models
Quiz: What Types of Variables?
What is a Latent Random Variable Model?
Why Latent Variable Models?
Deep Structured Latent Variable Models • Specify structure, but interpretable structure is often discrete, e.g. POS tags or dependency parse trees
Examples of Deep Latent Variable Models
A probabilistic perspective on Variational Auto-Encoder
What is Our Loss Function?
Practice
Variational Inference • Variational inference approximates the true posterior p(z|x) with a family of distributions (see the ELBO formula after the syllabus)
Variational Autoencoders
Learning VAE
Problem! Sampling Breaks Backprop
Solution: Re-parameterization Trick
Difficulties in Training • Of the two components in the VAE objective, the KL divergence term is much easier to learn
Solution 3
Weaken the Decoder
Discrete Latent Variables?
Method 1: Enumeration
Solution 4
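
For context, the objective these syllabus items build toward is the evidence lower bound (ELBO); this is standard VAE background rather than a formula quoted from the slides. Its two components are the reconstruction term and the KL divergence term referred to under "Difficulties in Training":

    \mathcal{L}(x; \theta, \phi) = \mathbb{E}_{q_\phi(z \mid x)}\big[\log p_\theta(x \mid z)\big] - \mathrm{KL}\big(q_\phi(z \mid x) \,\|\, p(z)\big)

Here q_\phi(z|x) is the encoder's approximation to the true posterior p(z|x), p_\theta(x|z) is the decoder, and p(z) is the prior, typically a standard Gaussian.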


Taught by

Graham Neubig

Related Courses

Deep Learning – Part 2
Indian Institute of Technology Madras via Swayam
Image Compression and Generation using Variational Autoencoders in Python
Coursera Project Network via Coursera
Probabilistic Deep Learning with TensorFlow 2
Imperial College London via Coursera
Generative Models
Serrano.Academy via YouTube
NVAE - A Deep Hierarchical Variational Autoencoder
Yannic Kilcher via YouTube