Rational Recurrences for Empirical Natural Language Processing
Offered By: Center for Language & Speech Processing (CLSP), JHU via YouTube
Course Description
Overview
Explore rational recurrences in empirical natural language processing in this 56-minute lecture by Noah Smith of the University of Washington. Delve into the family of recurrent neural networks whose hidden-state computations follow specific rules corresponding to parallelized weighted finite-state pattern matching. Discover how this approach aims to make deep learning models for NLP more understandable without sacrificing accuracy. Learn about the weighted finite-state view and how it can be used to derive new models. Examine experiments, interpretability (including negative patterns), and the benefits of sparse lasso regularization in this setting. Gain insights into the historical background, soft patterns (SoPa), the Simple Recurrent Unit, and the notion of "unigram" and "bigram" models in rational recurrences. Understand the potential of this approach for building NLP models that are more explainable yet remain powerful.
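To make the "weighted finite-state view" concrete, below is a minimal illustrative sketch (not the lecture's exact formulation) of a single SoPa-style soft pattern: a linear-chain weighted finite-state automaton over a max-times semiring whose forward scores are computed by an RNN-like recurrence over token embeddings. All names here (W_main, W_loop, sopa_score) and the sigmoid transition scoring are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

emb_dim, d = 8, 3                        # embedding size; pattern length (d transitions)
W_main = rng.normal(size=(d, emb_dim))   # scores for advancing one WFSA state (hypothetical parameters)
b_main = np.zeros(d)
W_loop = rng.normal(size=(d, emb_dim))   # scores for self-loops (staying in a state)
b_loop = np.zeros(d)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sopa_score(token_embeddings):
    """Best match score of the pattern over any span of the document.

    h[i] holds the best score of a partial match currently in WFSA state i.
    State 0 is the start state; reaching state d is a completed match.
    """
    h = np.full(d + 1, -np.inf)
    h[0] = 1.0                           # a match may begin at any position
    best = -np.inf
    for x in token_embeddings:           # x: embedding of the current token
        main = sigmoid(W_main @ x + b_main)   # per-transition advance scores in (0, 1)
        loop = sigmoid(W_loop @ x + b_loop)   # per-state self-loop scores in (0, 1)
        new_h = np.empty(d + 1)
        new_h[0] = 1.0
        for i in range(1, d + 1):
            advance = h[i - 1] * main[i - 1]  # consume x, move state i-1 -> i
            stay = h[i] * loop[i - 1]         # consume x, stay in state i
            new_h[i] = max(advance, stay)     # max-times semiring: keep the best path
        h = new_h
        best = max(best, h[d])           # record completed matches ending here
    return best

doc = rng.normal(size=(10, emb_dim))     # 10 random "token embeddings" as a stand-in document
print(sopa_score(doc))
```

In the full picture sketched in the lecture, many such patterns run in parallel and their match scores form the hidden representation, which is what makes the recurrence "rational"; a structured sparsity penalty (the sparse lasso of the syllabus) can then zero out entire patterns or states, contributing to the interpretability discussed above.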
Syllabus
Intro
A Bit of History
Outline
Weighted Patterns
Soft Patterns (SoPa)
Two-SoPa Recurrent Neural Network
Experiments
Interpretability (Negative Patterns)
Summary So Far
Simple Recurrent Unit (Lei et al., 2017)
Rational Recurrences and Others
"Unigram" and "Bigram" Models
Interpolation
Sparsity and Structured Sparsity
Benefit of Sparse Lasso
Procedure
Baselines
Visualization
Parting Shots
Taught by
Center for Language & Speech Processing (CLSP), JHU
Related Courses
Miracles of Human Language: An Introduction to Linguistics (Leiden University via Coursera)
Language and Mind (Indian Institute of Technology Madras via Swayam)
Text Analytics with Python (University of Canterbury via edX)
Playing With Language (TED-Ed via YouTube)
Computational Language: A New Kind of Science (World Science U)