
CMU Multilingual NLP - Machine Translation and Sequence-to-Sequence Models

Offered By: Graham Neubig via YouTube

Tags

Natural Language Processing (NLP) Courses
Deep Learning Courses
Transformers Courses
Attention Mechanisms Courses
Sequence to Sequence Models Courses
Self-Attention Courses

Course Description

Overview

Explore machine translation and sequence-to-sequence models in this 44-minute lecture from CMU's Multilingual Natural Language Processing course. Delve into conditional language modeling, simple sequence-to-sequence models, generation methods, attention mechanisms, and self-attention/transformers. Learn about calculating sentence probabilities, hidden state passing, and various generation techniques including ancestral sampling, greedy search, and beam search. Examine sentence representations, attention score functions, and the distinction between attention and alignment. Discover multi-headed attention, supervised training approaches, and the intricacies of self-attention in Transformer models. Gain insights into Transformer training tricks, masking techniques for efficient training, and a unified view of sequence-to-sequence models. Conclude with a code walkthrough to solidify understanding of these advanced NLP concepts.
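
As a concrete companion to the "calculating sentence probabilities" topic, here is a minimal Python sketch of the chain-rule decomposition a (conditional) language model uses. The token_prob stub and its toy vocabulary are illustrative stand-ins, not the lecture's code:

import math

# Hypothetical left-to-right language model stub: returns P(token | prefix).
# A real RNN or Transformer decoder would replace this toy lookup table.
def token_prob(token, prefix):
    table = {"the": 0.4, "cat": 0.3, "sat": 0.2, "</s>": 0.1}
    return table.get(token, 1e-9)

def sentence_logprob(tokens):
    # log P(w_1..w_n) = sum_i log P(w_i | w_1..w_{i-1})
    return sum(math.log(token_prob(t, tokens[:i])) for i, t in enumerate(tokens))

print(sentence_logprob(["the", "cat", "sat", "</s>"]))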

Syllabus

Intro
Language Models: generative models of text
Conditioned Language Models
Calculating the Probability of a Sentence
Conditional Language Models
One Type of Language Model (Mikolov et al. 2011)
How to Pass Hidden State?
The Generation Problem
Ancestral Sampling
Greedy Search
Beam Search (code sketches of these three generation methods follow the syllabus)
Sentence Representations
Calculating Attention (1)
A Graphical Example
Attention Score Functions (1) (code sketch after the syllabus)
Attention is not Alignment! (Koehn and Knowles 2017)
Coverage
Multi-headed Attention (code sketch after the syllabus)
Supervised Training (Liu et al. 2016)
Self-Attention (Cheng et al. 2016): each element in the sentence attends to the others
Why Self Attention?
Transformer Attention Tricks
Transformer Training Tricks
Masking for Training: we want to perform training in as few operations as possible, using big matrix multiplies (code sketch after the syllabus)
A Unified View of Sequence-to-Sequence Models
Code Walk
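
The three generation methods in the syllabus can be contrasted in a few lines of Python. This is a minimal sketch over a toy distribution; in the lecture's setting, next_probs would be a seq2seq decoder conditioned on a source sentence, and all names here are illustrative:

import math, random

# Toy next-token distribution standing in for a trained decoder.
VOCAB = ["a", "b", "</s>"]

def next_probs(prefix):
    p = [0.5, 0.3, 0.2] if len(prefix) < 3 else [0.1, 0.1, 0.8]
    return dict(zip(VOCAB, p))

def ancestral_sample(max_len=10):
    # Draw each token from the model's distribution until </s> is sampled.
    out = []
    while len(out) < max_len:
        probs = next_probs(out)
        tok = random.choices(list(probs), weights=list(probs.values()))[0]
        if tok == "</s>":
            break
        out.append(tok)
    return out

def greedy_search(max_len=10):
    # Always take the single most probable next token.
    out = []
    while len(out) < max_len:
        probs = next_probs(out)
        tok = max(probs, key=probs.get)
        if tok == "</s>":
            break
        out.append(tok)
    return out

def beam_search(beam_size=2, max_len=10):
    # Keep the beam_size best-scoring partial hypotheses at every step.
    beams, finished = [([], 0.0)], []        # (tokens, log-probability)
    for _ in range(max_len):
        candidates = []
        for toks, score in beams:
            for tok, p in next_probs(toks).items():
                hyp = (toks + [tok], score + math.log(p))
                (finished if tok == "</s>" else candidates).append(hyp)
        beams = sorted(candidates, key=lambda h: h[1], reverse=True)[:beam_size]
    # Simple length normalization, so short hypotheses are not unduly favored.
    return max(finished + beams, key=lambda h: h[1] / len(h[0]))

print(ancestral_sample(), greedy_search(), beam_search())

Note the length normalization in the last line of beam_search: comparing raw log-probabilities would favor the shortest finished hypothesis.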
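A sketch of attention score functions commonly covered under this heading, in the dot-product, bilinear, and multi-layer-perceptron forms, assuming NumPy; the parameters W, W1, W2, v and all shapes are illustrative, not taken from the lecture:

import numpy as np

# Score functions over encoder states H (n x d) and a decoder query q (d,).
def dot_score(H, q):
    return H @ q                             # score_i = h_i . q

def bilinear_score(H, q, W):
    return H @ (W @ q)                       # score_i = h_i^T W q

def mlp_score(H, q, W1, W2, v):
    return np.tanh(H @ W1.T + W2 @ q) @ v    # score_i = v^T tanh(W1 h_i + W2 q)

def attend(H, scores):
    # Softmax the scores, then take a weighted sum of the encoder states.
    a = np.exp(scores - scores.max())
    a /= a.sum()
    return a @ H                             # the context vector

rng = np.random.default_rng(0)
H, q = rng.normal(size=(5, 8)), rng.normal(size=8)
W = rng.normal(size=(8, 8))
W1, W2, v = rng.normal(size=(6, 8)), rng.normal(size=(6, 8)), rng.normal(size=6)
print(attend(H, dot_score(H, q)).shape)           # (8,)
print(attend(H, bilinear_score(H, q, W)).shape)   # (8,)
print(attend(H, mlp_score(H, q, W1, W2, v)).shape)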
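For multi-headed attention, a deliberately simplified NumPy sketch: it splits the model dimension into subspaces and attends in each independently, omitting the learned per-head query/key/value projections of a full Transformer layer:

import numpy as np

def multi_head_self_attention(X, n_heads=2):
    # Run dot-product self-attention separately in each subspace ("head"),
    # then concatenate the per-head outputs back to the model dimension.
    outs = []
    for h in np.split(X, n_heads, axis=1):   # each head: (n, d / n_heads)
        scores = h @ h.T / np.sqrt(h.shape[1])
        w = np.exp(scores - scores.max(axis=-1, keepdims=True))
        w /= w.sum(axis=-1, keepdims=True)
        outs.append(w @ h)
    return np.concatenate(outs, axis=1)      # back to (n, d)

X = np.random.default_rng(2).normal(size=(4, 8))
print(multi_head_self_attention(X).shape)    # (4, 8)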
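Finally, a minimal NumPy sketch of the "masking for training" idea: a causal mask hides future positions, so every position of a sentence is trained in one big matrix multiply without leaking the tokens being predicted. The function name and shapes are assumptions, not the lecture's code:

import numpy as np

def masked_self_attention(X):
    # X: (n, d) token vectors. Causal mask: position i attends only to j <= i.
    n, d = X.shape
    scores = X @ X.T / np.sqrt(d)                     # (n, n) attention logits
    scores[np.triu(np.ones((n, n), dtype=bool), k=1)] = -np.inf
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ X                                      # (n, d) outputs

X = np.random.default_rng(1).normal(size=(4, 8))
# Row 0 can only attend to itself, so its output equals its input.
print(np.allclose(masked_self_attention(X)[0], X[0]))  # True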


Taught by

Graham Neubig

Related Courses

Generative AI Language Modeling with Transformers
IBM via Coursera
Transformer Models and BERT Model - Deutsch
Google Cloud via Coursera
Generative AI: Introduction to Large Language Models
LinkedIn Learning
Generative AI: Working with Large Language Models
LinkedIn Learning
TensorFlow: Working with NLP
LinkedIn Learning