Neural Nets for NLP 2021 - Document-Level Models
Offered By: Graham Neubig via YouTube
Course Description
Overview
Syllabus
Some NLP Tasks we've Handled
Some Connections to Tasks over Documents
Document Level Language Modeling
Remember: Modeling using Recurrent Networks
Simple: Infinitely Pass State
Separate Encoding for Coarse-grained Document Context
Self-attention/Transformers Across Sentences
Transformer-XL: Truncated BPTT + Transformer
Adaptive Span Transformers
Reformer: Efficient Adaptively Sparse Attention
How to Evaluate Document-level Models?
Document Problems: Entity Coreference
Mention (Noun Phrase) Detection
Components of a Coreference Model
Coreference Models: Instances
Mention Pair Models
Entity Models: Entity-Mention Models
Advantages of Neural Network Models for Coreference
End-to-End Neural Coreference (Span Model)
End-to-End Neural Coreference (Coreference Model)
Using Coreference in Neural Models
Discourse Parsing w/ Attention-based Hierarchical Neural Networks
Uses of Discourse Structure in Neural Models
Taught by
Graham Neubig
Related Courses
Transformers: Text Classification for NLP Using BERT
LinkedIn Learning
TensorFlow: Working with NLP
LinkedIn Learning
TransGAN - Two Transformers Can Make One Strong GAN - Machine Learning Research Paper Explained
Yannic Kilcher via YouTube
Nyströmformer - A Nyström-Based Algorithm for Approximating Self-Attention
Yannic Kilcher via YouTube
Recreate Google Translate - Model Training
Edan Meyer via YouTube