
Neural Nets for NLP - Models with Latent Random Variables

Offered By: Graham Neubig via YouTube

Tags

Neural Networks Courses
Natural Language Processing (NLP) Courses
Variational Autoencoders Courses
Generative Models Courses

Course Description

Overview

Explore a comprehensive lecture on models with latent random variables in neural networks for natural language processing. Delve into the distinctions between generative and discriminative models, as well as deterministic and random variables. Gain insights into Variational Autoencoders, their architecture, and loss functions. Learn techniques for handling discrete latent variables, including the Gumbel-Softmax trick. Examine real-world applications of these concepts in NLP tasks, such as language modeling and semantic similarity. Engage with discussion questions to deepen understanding of tree-structured latent variables and their implications for NLP models.
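For orientation on the loss function the lecture covers, here is a minimal NumPy sketch of a VAE-style objective and the reparameterization trick, assuming a diagonal Gaussian posterior and a squared-error stand-in for the reconstruction term; the function names are illustrative, not code from the lecture.

import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, logvar):
    # Reparameterization trick: z = mu + sigma * eps with eps ~ N(0, I),
    # so gradients can flow through mu and logvar.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def vae_loss(x, x_recon, mu, logvar):
    # Reconstruction term (squared error as a stand-in for -log p(x|z)).
    recon = np.sum((x - x_recon) ** 2)
    # KL(N(mu, diag(sigma^2)) || N(0, I)) in closed form for a diagonal Gaussian.
    kl = -0.5 * np.sum(1.0 + logvar - mu ** 2 - np.exp(logvar))
    return recon + kl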

Syllabus

Introduction
Discriminative vs generative
Observed vs latent variables
Quiz
Latent Variable Models
Types of latent random variables
Example
Loss Function
Variational inference
Reconstruction loss and KL regularizer
Regularized autoencoder
Learning the VAE
Reparameterization Trick
General
Language
VAE
Reparameterization
Motivation
Consistency
Semantic Similarity
Solutions
Free Bits
Weaken Decoder
Aggressive Inference Network
Handling discrete latent variables
Discrete latent variables
Sampling discrete variables
Gumbel softmax (sketched below)
Application examples
Discrete random variables
Tree structured latent variables
Discussion question
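
The Gumbel-Softmax item above is the standard trick for sampling discrete latent variables differentiably. Here is a minimal NumPy sketch (the function name and temperature value are illustrative assumptions, not code from the lecture): perturb the logits with Gumbel noise, then apply a temperature-controlled softmax; lower temperatures push the sample toward a hard one-hot vector.

import numpy as np

rng = np.random.default_rng(0)

def gumbel_softmax(logits, tau=1.0):
    # Gumbel noise: -log(-log(U)) with U ~ Uniform(0, 1).
    gumbel = -np.log(-np.log(rng.uniform(size=logits.shape)))
    # Perturb the logits, then apply a temperature-controlled softmax.
    y = (logits + gumbel) / tau
    y = np.exp(y - y.max())
    return y / y.sum()

# Lower tau sharpens the sample toward a hard one-hot vector.
print(gumbel_softmax(np.log(np.array([0.7, 0.2, 0.1])), tau=0.5))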


Taught by

Graham Neubig

Related Courses

Deep Learning – Part 2
Indian Institute of Technology Madras via Swayam
Image Compression and Generation using Variational Autoencoders in Python
Coursera Project Network via Coursera
Probabilistic Deep Learning with TensorFlow 2
Imperial College London via Coursera
Generative Models
Serrano.Academy via YouTube
NVAE - A Deep Hierarchical Variational Autoencoder
Yannic Kilcher via YouTube