Emergent Linguistic Structure in Deep Contextual Neural Word Representations - Chris Manning
Offered By: Institute for Advanced Study via YouTube
Course Description
Overview
Explore emergent linguistic structure in deep contextual neural word representations with Stanford University's Chris Manning in this 43-minute lecture from the Workshop on Theory of Deep Learning at the Institute for Advanced Study. Delve into language modeling, enlightenment-era neural language models, and how they address the curse of dimensionality by sharing statistical strength. Examine recurrent models with self-attention, masked sequence models such as BERT, and the SQuAD question-answering task. Discover what BERT attention heads do, including a head that tracks coreference, and learn how distance metrics unify trees and vectors. Gain insights into finding trees in vector spaces and advance your understanding of deep learning for natural language processing.
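The lecture's central technical claim, that a simple distance metric can unify trees and vectors, can be sketched in a few lines. The snippet below is a minimal illustration of the structural-probe idea the talk builds on: a linear map is applied to contextual word vectors, squared L2 distances between the transformed vectors are treated as predicted parse-tree distances, and a tree over the tokens is recovered as the minimum spanning tree of that distance matrix. The array shapes, the random stand-in vectors, and the untrained probe matrix B are illustrative assumptions, not the speaker's actual code or data.

```python
# Minimal sketch of the "finding trees in vector spaces" idea:
# squared L2 distance under a learned linear map approximates
# parse-tree distance, and the tree is read off as a minimum
# spanning tree. All names and values here are illustrative.
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

rng = np.random.default_rng(0)
seq_len, hidden_dim, probe_rank = 6, 768, 64

# Stand-in for BERT's contextual vectors for one sentence (one row per token).
H = rng.normal(size=(seq_len, hidden_dim))

# The probe matrix B would normally be trained so that ||B h_i - B h_j||^2
# matches gold parse-tree distances; here it is random for illustration.
B = rng.normal(size=(hidden_dim, probe_rank)) / np.sqrt(hidden_dim)

T = H @ B                              # transformed vectors, shape (seq_len, probe_rank)
diff = T[:, None, :] - T[None, :, :]   # all pairwise differences
dist = (diff ** 2).sum(-1)             # predicted squared tree distances, (seq_len, seq_len)

# Decode an unlabeled, undirected tree over the tokens as the
# minimum spanning tree of the predicted distance matrix.
mst = minimum_spanning_tree(dist).toarray()
edges = [(i, j) for i in range(seq_len) for j in range(seq_len) if mst[i, j] > 0]
print("recovered tree edges:", edges)
```

In the lecture, the probe is trained against gold dependency parses; even this untrained sketch shows how the pieces fit together: contextual vectors, a distance metric, and a spanning-tree decoder.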
Syllabus
Intro
1. Language Modeling
Enlightenment era neural language models (NLMs): solving the curse of dimensionality by sharing statistical strength
Recurrent models with (self-)attention
Self-attention in masked sequence model
SQuAD question answering
What do BERT attention heads do?
There's a coreference head (!)
Distance metrics unify trees and vectors
Finding trees in vector spaces
Taught by
Institute for Advanced Study
Related Courses
Dimensionality Reduction in Python
DataCamp
Sparse Nonlinear Dynamics Models with SINDy - The Library of Candidate Nonlinearities
Steve Brunton via YouTube
Overcoming the Curse of Dimensionality and Mode Collapse - Ke Li
Institute for Advanced Study via YouTube
Multilevel Weighted Least Squares Polynomial Approximation – Sören Wolfers, KAUST
Alan Turing Institute via YouTube
Score Estimation with Infinite-Dimensional Exponential Families – Dougal Sutherland, UCL
Alan Turing Institute via YouTube