Neural Nets for NLP 2020 - Attention

Offered By: Graham Neubig via YouTube

Tags

Neural Networks Courses, Machine Learning Courses, Natural Language Processing (NLP) Courses, Attention Mechanisms Courses

Course Description

Overview

Explore attention mechanisms in neural networks for natural language processing in this comprehensive lecture from CMU's Neural Networks for NLP course. Delve into various aspects of attention, including what to attend to, improvements to attention techniques, and specialized attention varieties. Examine a case study on the "Attention is All You Need" paper, and learn about attention score functions, input sentence handling, multi-headed attention, and training tricks. Gain insights into incorporating Markov properties, supervised training for attention, and hard attention concepts. This in-depth presentation covers essential topics for understanding and implementing attention in NLP models.
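The attention score functions mentioned above follow the standard taxonomy (dot-product, bilinear, and multi-layer-perceptron scores). As a rough illustration of the idea, here is a minimal NumPy sketch; the function names, shapes, and example sizes are illustrative assumptions, not code from the course materials.

```python
import numpy as np

# Three common attention score functions (illustrative, not course code).

def dot_product_score(q, k):
    # Multiplicative score: q . k (Luong et al. 2015).
    return q @ k

def bilinear_score(q, k, W):
    # Bilinear score: q^T W k, with a learned matrix W.
    return q @ W @ k

def mlp_score(q, k, W, w):
    # MLP score: w^T tanh(W [q; k]) (Bahdanau et al. 2015).
    return w @ np.tanh(W @ np.concatenate([q, k]))

def attend(query, keys, values, score_fn):
    # Score each key against the query, softmax-normalize,
    # and return the attention-weighted sum of the values.
    scores = np.array([score_fn(query, k) for k in keys])
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ values

# Example: a 4-dim query attending over 3 encoder states.
rng = np.random.default_rng(0)
q = rng.standard_normal(4)
keys = rng.standard_normal((3, 4))
values = rng.standard_normal((3, 4))
context = attend(q, keys, values, dot_product_score)  # shape (4,)
```

Swapping in bilinear_score or mlp_score (with appropriately shaped parameters) changes only how the scores are computed; the softmax-and-weighted-sum step is the same in each case.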

Syllabus

Intro
Sentence Representations
Calculating Attention (1)
A Graphical Example
Attention Score Functions (1)
Attention Score Functions (2)
Input Sentence: Copy
Input Sentence: Bias (use a translation dictionary to bias outputs; Arthur et al. 2016)
Previously Generated Things
Various Modalities
Multiple Sources
Coverage
Incorporating Markov Properties (Cohn et al. 2015)
Supervised Training (Mi et al. 2016)
Hard Attention
Multi-headed Attention
Attention Tricks
Training Tricks
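
Two of the syllabus items, "Multi-headed Attention" and the "Attention is All You Need" case study, center on the Transformer's attention. Below is a minimal, self-contained NumPy sketch of multi-head scaled dot-product attention in the style of Vaswani et al. (2017); the variable names and the omission of the output projection, masking, and batching are simplifying assumptions, not course code.

```python
import numpy as np

def multi_head_attention(X, Wq, Wk, Wv, n_heads):
    # Project the input into queries, keys, and values, split the
    # model dimension into n_heads subspaces, run scaled dot-product
    # attention independently per head, then concatenate the heads.
    # X: (seq_len, d_model); Wq, Wk, Wv: (d_model, d_model).
    d_model = X.shape[1]
    d_head = d_model // n_heads
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    heads = []
    for h in range(n_heads):
        sl = slice(h * d_head, (h + 1) * d_head)
        scores = Q[:, sl] @ K[:, sl].T / np.sqrt(d_head)  # (seq, seq)
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)    # row-wise softmax
        heads.append(weights @ V[:, sl])                  # (seq, d_head)
    return np.concatenate(heads, axis=-1)                 # (seq, d_model)

# Example: 5 tokens, d_model = 8, 2 heads.
rng = np.random.default_rng(1)
X = rng.standard_normal((5, 8))
Wq, Wk, Wv = (rng.standard_normal((8, 8)) for _ in range(3))
out = multi_head_attention(X, Wq, Wk, Wv, n_heads=2)  # shape (5, 8)
```

The per-head dimension split lets each head attend to a different subspace of the representation, which is the motivation the lecture's Transformer case study discusses.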


Taught by

Graham Neubig

Related Courses

Deep Learning for Natural Language Processing
University of Oxford via Independent
Sequence Models
DeepLearning.AI via Coursera
Deep Learning Part 1 (IITM)
Indian Institute of Technology Madras via Swayam
Deep Learning - Part 1
Indian Institute of Technology, Ropar via Swayam
Deep Learning - IIT Ropar
Indian Institute of Technology, Ropar via Swayam