Neural Nets for NLP 2021 - Attention

Offered By: Graham Neubig via YouTube

Tags

Neural Networks Courses, Natural Language Processing (NLP) Courses, Attention Mechanisms Courses, Transformer Architecture Courses

Course Description

Overview

Learn about attention mechanisms in neural networks for natural language processing in this lecture from CMU's Neural Networks for NLP course. Explore the "Attention is All You Need" paper, improvements to attention techniques, specialized attention varieties, and what neural networks actually attend to. Dive into topics such as sentence representations, attention score functions, multi-headed attention, training tricks, and applications to various modalities. Gain insights into incorporating Markov properties, coverage, and dictionary probabilities, and into handling multiple sources in attention-based models.
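
To make the lecture's central operation concrete, the following is a minimal NumPy sketch of scaled dot-product attention as introduced in "Attention is All You Need". It is an illustration written for this page, not code from the course; the function and variable names are chosen for readability.

import numpy as np

def softmax(x, axis=-1):
    # Subtract the row max before exponentiating for numerical stability.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    # Q: (n_queries, d), K: (n_keys, d), V: (n_keys, d_v).
    d = Q.shape[-1]
    # Score every query against every key, scaling by sqrt(d) so the
    # softmax does not saturate as the dimensionality grows.
    scores = Q @ K.T / np.sqrt(d)
    weights = softmax(scores, axis=-1)  # each row sums to 1
    return weights @ V                  # weighted average of the values

# Toy example: 2 queries attending over 3 key/value pairs.
rng = np.random.default_rng(0)
Q = rng.normal(size=(2, 4))
K = rng.normal(size=(3, 4))
V = rng.normal(size=(3, 4))
print(scaled_dot_product_attention(Q, K, V).shape)  # -> (2, 4)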

Syllabus

Intro
Sentence Representations
Calculating Attention (1)
A Graphical Example
Attention Score Functions (1)
Attention Score Functions (2)
Multi-headed Attention
Attention Tricks
Summary of the Transformer
Training Tricks
Masking for Training
Incorporating Markov Properties
Coverage
Input Sentence: Copy
Dictionary Probabilities
Previously Generated Things
Various Modalities
Multiple Sources
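
The syllabus items "Attention Score Functions (1)" and "(2)" survey different ways of scoring a query against a key before the softmax. The sketch below illustrates four standard variants (dot product, scaled dot product, bilinear, and multi-layer perceptron); the parameter names and shapes are hypothetical, chosen only for this example, and the code is not taken from the lecture.

import numpy as np

def dot_score(q, k):
    # Dot product: no parameters, requires equal query/key dimensions.
    return q @ k

def scaled_dot_score(q, k):
    # Scaled dot product: divides by sqrt(d) to keep scores well-ranged.
    return (q @ k) / np.sqrt(k.shape[-1])

def bilinear_score(q, k, W):
    # Bilinear ("general"): a learned interaction matrix W between q and k.
    return q @ W @ k

def mlp_score(q, k, W1, w2):
    # Multi-layer perceptron: concatenate q and k, then a small network.
    return w2 @ np.tanh(W1 @ np.concatenate([q, k]))

rng = np.random.default_rng(0)
q, k = rng.normal(size=4), rng.normal(size=4)
W = rng.normal(size=(4, 4))    # bilinear interaction matrix
W1 = rng.normal(size=(8, 8))   # MLP layer over the concatenated pair
w2 = rng.normal(size=8)        # MLP output projection
print(dot_score(q, k), scaled_dot_score(q, k))
print(bilinear_score(q, k, W), mlp_score(q, k, W1, w2))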


Taught by

Graham Neubig

Related Courses

Artificial Intelligence Foundations: Neural Networks
LinkedIn Learning
Transformers: Text Classification for NLP Using BERT
LinkedIn Learning
TensorFlow: Working with NLP
LinkedIn Learning
Learn Natural Language Processing with BERT! - NLP Techniques Leading from Attention and Transformers to BERT
Udemy
Complete Natural Language Processing Tutorial in Python
Keith Galli via YouTube