Neural Nets for NLP 2017 - Transition-Based Dependency Parsing

Offered By: Graham Neubig via YouTube

Tags

Neural Networks, Natural Language Processing (NLP), Feature Extraction

Course Description

Overview

Explore transition-based dependency parsing in this comprehensive lecture from CMU's Neural Networks for NLP course. Delve into the fundamentals of transition-based parsing, including shift-reduce parsing with feed-forward networks and stack LSTMs. Learn about a simple alternative approach using linearized trees. Gain insights into various parsing techniques, feature extraction methods, and the importance of tree structures in natural language processing. Examine practical code examples and follow along with detailed slides to reinforce your understanding of these advanced NLP concepts.
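
To make the shift-reduce idea concrete, here is a minimal sketch of an arc-standard parser in Python. The function arc_standard_parse and the scripted oracle are illustrative assumptions, not code from the lecture; in the course, the action decision would instead come from a trained classifier (a feed-forward network or stack LSTM) applied to features of the current parser configuration.

# Minimal sketch of arc-standard shift-reduce dependency parsing.
# The function name and oracle interface are hypothetical, for
# illustration only.
from collections import namedtuple

Arc = namedtuple("Arc", ["head", "dependent"])

def arc_standard_parse(words, oracle):
    """Run the arc-standard transition system over `words`.

    `oracle(stack, buffer)` returns "SHIFT", "LEFT-ARC", or
    "RIGHT-ARC"; in the lecture this decision is made by a
    classifier over features of the parser configuration.
    """
    stack, buffer, arcs = [], list(range(len(words))), []
    while buffer or len(stack) > 1:
        action = oracle(stack, buffer)
        if action == "SHIFT":
            stack.append(buffer.pop(0))   # move next word onto the stack
        elif action == "LEFT-ARC":
            dependent = stack.pop(-2)     # stack top heads the word below it
            arcs.append(Arc(head=stack[-1], dependent=dependent))
        elif action == "RIGHT-ARC":
            dependent = stack.pop()       # word below the top heads the top
            arcs.append(Arc(head=stack[-1], dependent=dependent))
    return arcs

# Toy run on "I saw her": saw heads both I and her.
actions = iter(["SHIFT", "SHIFT", "LEFT-ARC", "SHIFT", "RIGHT-ARC"])
print(arc_standard_parse(["I", "saw", "her"], lambda s, b: next(actions)))
# [Arc(head=1, dependent=0), Arc(head=1, dependent=2)]

Replacing the scripted action list with a learned scoring function over the stack and buffer is exactly the classification step the syllabus items below walk through.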

Syllabus

Intro
Two Types of Linguistic Structure
Arc Standard Shift-Reduce Parsing (Yamada & Matsumoto 2003, Nivre 2003)
Shift-Reduce Example
Classification for Shift-Reduce
Making Classification Decisions
What Features to Extract?
Non-linear Function: Cube Function (sketched in the note after this syllabus)
Why Tree Structure?
Tree-structured LSTM (Tai et al. 2015)
Encoding Parsing Configurations w/ RNNs
A Simple Approximation: Linearized Trees (Vinyals et al. 2015)
Recursive Neural Networks (Socher et al. 2011)
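
A note on the cube function item above: the elementwise cube activation g(z) = z^3, used in Chen and Manning's (2014) feed-forward dependency parser, lets a single hidden layer capture products of up to three input features, mimicking hand-engineered feature conjunctions. A minimal sketch follows; the function name and the shapes are illustrative assumptions, not code from the course.

import numpy as np

def cube_hidden_layer(x, W, b):
    """Hidden layer with the elementwise cube activation g(z) = z**3."""
    return (W @ x + b) ** 3

# Illustrative shapes: 6 concatenated configuration features -> 4 hidden units.
rng = np.random.default_rng(0)
x = rng.normal(size=6)             # concatenated embedding features
W = rng.normal(size=(4, 6))
b = rng.normal(size=4)
print(cube_hidden_layer(x, W, b))  # 4 hidden activations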


Taught by

Graham Neubig

Related Courses

Neural Networks for Machine Learning
University of Toronto via Coursera
Good Brain, Bad Brain: Basics
University of Birmingham via FutureLearn
Statistical Learning with R
Stanford University via edX
Machine Learning 1—Supervised Learning
Brown University via Udacity
Fundamentals of Neuroscience, Part 2: Neurons and Networks
Harvard University via edX