Neural Nets for NLP - Models with Latent Random Variables
Offered By: Graham Neubig via YouTube
Course Description
Overview
Syllabus
Intro
Discriminative vs. Generative Models
Quiz: What Types of Variables?
What is Latent Random Variable Model
Why Latent Variable Models?
Deep Structured Latent Variable Models • Specify structure, but interpretable structure is often discrete, e.g. POS tags, dependency parse trees
Examples of Deep Latent Variable Models
A probabilistic perspective on Variational Auto-Encoder
What is Our Loss Function?
Practice
Variational Inference • Variational inference approximates the true posterior p(z|x) with a tractable family of distributions q(z|x)
Variational Auto-Encoders
Variational Autoencoders
Learning VAE
Problem! Sampling Breaks Backprop
Solution: Re-parameterization Trick
Difficulties in Training • Of the two components in the VAE objective, the KL divergence term is much easier to learn
Solution 3
Weaken the Decoder
Discrete Latent Variables?
Method 1: Enumeration
Solution 4
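The loss function covered in the syllabus above is the evidence lower bound (ELBO). As a brief refresher on the standard form (background material, not transcribed from the lecture):

```latex
\log p(x) \ge \mathbb{E}_{q(z\mid x)}\big[\log p(x\mid z)\big] - \mathrm{KL}\big(q(z\mid x)\,\|\,p(z)\big)
```

The first term is the reconstruction objective and the second is the KL regularizer referenced under "Difficulties in Training".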
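The "Sampling Breaks Backprop" and "Re-parameterization Trick" sections refer to the standard trick of writing a Gaussian sample as a deterministic function of the parameters plus parameter-free noise. Below is a minimal NumPy sketch; the function names and the unit-Gaussian prior are illustrative assumptions, not code from the lecture.

```python
import numpy as np

def reparameterize(mu, log_var, rng):
    """Sample z ~ N(mu, sigma^2) as mu + sigma * eps, where eps ~ N(0, I).

    Because the randomness enters only through eps (which has no learnable
    parameters), gradients can flow through mu and log_var during backprop.
    """
    eps = rng.standard_normal(mu.shape)  # noise drawn outside the "graph"
    return mu + np.exp(0.5 * log_var) * eps

def gaussian_kl(mu, log_var):
    """KL( N(mu, sigma^2) || N(0, I) ) in closed form -- the VAE's KL term."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

rng = np.random.default_rng(0)
mu, log_var = np.array([0.5, -0.2]), np.array([0.0, -1.0])
z = reparameterize(mu, log_var, rng)   # differentiable w.r.t. mu, log_var
kl = gaussian_kl(mu, log_var)
```

In an autodiff framework the same idea appears as, e.g., PyTorch's `Normal(mu, sigma).rsample()`; the closed-form KL is what makes that term "easier to learn" than the reconstruction term.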
Taught by
Graham Neubig
Related Courses
Neural Networks for Machine Learning (University of Toronto via Coursera)
Good Brain, Bad Brain: Basics (University of Birmingham via FutureLearn)
Statistical Learning with R (Stanford University via edX)
Machine Learning 1—Supervised Learning (Brown University via Udacity)
Fundamentals of Neuroscience, Part 2: Neurons and Networks (Harvard University via edX)