Neural Nets for NLP 2017 - Unsupervised Learning of Structure

Offered By: Graham Neubig via YouTube

Tags

Neural Networks Courses
Natural Language Processing (NLP) Courses
Unsupervised Learning Courses
Reinforcement Learning Courses
Hidden Markov Models Courses

Course Description

Overview

Explore unsupervised learning of structure in natural language processing through this comprehensive lecture from CMU's Neural Networks for NLP course. Delve into the differences between learning features and learning structure, various unsupervised learning methods, and key design decisions for unsupervised models. Examine real-world examples of unsupervised learning, including hidden Markov models, CRF autoencoders, and dependency induction with neural networks. Gain insights into advanced topics such as learning with reinforcement learning, phrase structure vs. dependency structure, and learning language-level features. Access accompanying slides and related course materials to enhance your understanding of this complex subject in computational linguistics and machine learning.

Syllabus

Supervised, Unsupervised, Semi-supervised
Learning Features vs. Learning Discrete Structure
Unsupervised Feature Learning (Review)
How do we Use Learned Features?
What About Discrete Structure?
A Simple First Attempt
Unsupervised Hidden Markov Models • Replace labeled states with unlabeled state numbers
Hidden Markov Models w/ Gaussian Emissions • Instead of parameterizing each state with a categorical distribution, we can use a Gaussian (or Gaussian mixture)! (see the first sketch after this syllabus)
Featurized Hidden Markov Models (Tran et al. 2016) • Calculate the transition/emission probabilities with neural networks! • Emission: calculate a representation of each word in the vocabulary (see the second sketch after this syllabus)
CRF Autoencoders (Ammar et al. 2014)
Soft vs. Hard Tree Structure
One Other Paradigm: Weak Supervision
Gated Convolution (Cho et al. 2014)
Learning with RL (Yogatama et al. 2016)
Phrase Structure vs. Dependency Structure
Dependency Model w/ Valence (Klein and Manning 2004)
Unsupervised Dependency Induction w/ Neural Nets (Jiang et al. 2016)
Learning Dependency Heads w/ Attention (Kuncoro et al. 2017)
Learning Segmentations w/ Reconstruction Loss (Elsner and Shain 2017)
Learning Language-level Features (Malaviya et al. 2017) • All previous work learned features of a single sentence
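To make the Gaussian-emission idea from the syllabus concrete, here is a minimal sketch, not taken from the lecture slides: an HMM whose hidden states are unlabeled numbers and whose emissions are Gaussian densities over word embeddings, scored with the forward algorithm. All names, the toy dimensions, and the random parameters are illustrative assumptions.

```python
# Sketch: HMM with Gaussian emissions over word embeddings.
# Parameters are random stand-ins; in practice they would be fit with EM
# or gradient-based marginal-likelihood training.
import numpy as np
from scipy.stats import multivariate_normal

K, D = 5, 50                                 # hidden states, embedding size
rng = np.random.default_rng(0)

trans = rng.dirichlet(np.ones(K), size=K)    # transition matrix A[i, j]
init = rng.dirichlet(np.ones(K))             # initial state distribution
means = rng.normal(size=(K, D))              # one Gaussian mean per state
cov = np.eye(D)                              # shared spherical covariance

def forward_loglik(embeddings):
    """Log-likelihood of a sentence (sequence of word embeddings)
    under the Gaussian-emission HMM, via the forward algorithm."""
    # emission log-densities: log p(x_t | state k), shape (T, K)
    log_emit = np.stack(
        [multivariate_normal.logpdf(embeddings, means[k], cov)
         for k in range(K)], axis=1)
    log_alpha = np.log(init) + log_emit[0]
    for t in range(1, len(embeddings)):
        # log-sum-exp over previous states
        log_alpha = log_emit[t] + np.logaddexp.reduce(
            log_alpha[:, None] + np.log(trans), axis=0)
    return np.logaddexp.reduce(log_alpha)

sentence = rng.normal(size=(7, D))           # 7 toy "word embeddings"
print(forward_loglik(sentence))
```

Because the states carry no labels, a tagging is only recovered after training by mapping each induced state to its most frequent gold tag, which is the standard evaluation for unsupervised POS induction.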
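The featurized-HMM item (Tran et al. 2016) replaces lookup-table emissions with probabilities computed by a neural network. The following is a hedged sketch of just that emission computation: a softmax over compatibility scores between a state embedding and a learned representation of every word in the vocabulary. The dimensions, the stand-in word features, and the single tanh layer are assumptions for illustration, not the paper's exact architecture.

```python
# Sketch: neural emission matrix p(word | state) for a featurized HMM.
import numpy as np

K, V, D = 5, 1000, 50                        # states, vocab size, hidden size
rng = np.random.default_rng(0)

state_emb = rng.normal(size=(K, D))          # one vector per hidden state
word_feats = rng.normal(size=(V, 20))        # stand-in word-level features
W = rng.normal(size=(20, D)) * 0.1           # toy one-layer word encoder

def emission_matrix():
    """p(word | state) for every state: softmax over the whole vocabulary."""
    word_repr = np.tanh(word_feats @ W)      # (V, D) word representations
    scores = state_emb @ word_repr.T         # (K, V) compatibility scores
    scores -= scores.max(axis=1, keepdims=True)   # numerical stability
    probs = np.exp(scores)
    return probs / probs.sum(axis=1, keepdims=True)

B = emission_matrix()
assert np.allclose(B.sum(axis=1), 1.0)       # each row is a distribution
```

The design point the slide is making: because emissions are a function of word representations rather than a per-word table, parameters are shared across similar words, which helps when the vocabulary is large and supervision is absent.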


Taught by

Graham Neubig

Related Courses

Computational Neuroscience
University of Washington via Coursera
Reinforcement Learning
Brown University via Udacity
Reinforcement Learning
Indian Institute of Technology Madras via Swayam
FA17: Machine Learning
Georgia Institute of Technology via edX
Introduction to Reinforcement Learning
Higher School of Economics via Coursera