
DeBERTa - Decoding-Enhanced BERT with Disentangled Attention

Offered By: Yannic Kilcher via YouTube

Tags

Machine Learning Courses, Transformer Models Courses

Course Description

Overview

Explore a comprehensive video explanation of the DeBERTa (Decoding-enhanced BERT with Disentangled Attention) machine learning paper. Delve into this next iteration of BERT-style self-attention Transformer models, which improves on RoBERTa and sets state-of-the-art results on multiple NLP tasks. Learn about the key improvements, including the disentangled attention mechanism, relative positional encodings, and the enhanced mask decoder. Examine the model's architecture, pretraining efficiency, and performance on downstream tasks. Follow along as the video breaks down complex concepts, presents experimental results, and discusses scaling the model up to 1.5 billion parameters. Gain insights into the paper's abstract, its authors, and the model's impact on the SuperGLUE benchmark.
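For a concrete picture of the disentangled attention and relative-position terms covered in the video, below is a minimal NumPy sketch of the attention score described in the paper: a content-to-content term plus content-to-position and position-to-content terms, scaled by 1/sqrt(3d), with the relative distance clipped into [0, 2k − 1]. Function and variable names here are illustrative assumptions, not taken from any official DeBERTa implementation.

```python
import numpy as np

def rel_distance(i, j, k):
    """Relative distance delta(i, j), bucketed into [0, 2k - 1] as in the paper."""
    d = i - j
    if d <= -k:
        return 0
    if d >= k:
        return 2 * k - 1
    return d + k

def disentangled_scores(H, P, Wq_c, Wk_c, Wq_r, Wk_r, k):
    """Single-head disentangled attention scores (illustrative sketch).

    H : (n, d)  content hidden states
    P : (2k, d) relative-position embeddings
    W*: (d, d)  content (c) and relative-position (r) projections
    """
    Qc, Kc = H @ Wq_c, H @ Wk_c      # content queries / keys
    Qr, Kr = P @ Wq_r, P @ Wk_r      # relative-position queries / keys
    n, d = Qc.shape
    scores = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            c2c = Qc[i] @ Kc[j]                      # content-to-content
            c2p = Qc[i] @ Kr[rel_distance(i, j, k)]  # content-to-position
            p2c = Kc[j] @ Qr[rel_distance(j, i, k)]  # position-to-content
            scores[i, j] = (c2c + c2p + p2c) / np.sqrt(3 * d)
    return scores

# Toy usage with random inputs
rng = np.random.default_rng(0)
n, d, k = 6, 8, 4
H = rng.normal(size=(n, d))
P = rng.normal(size=(2 * k, d))
Ws = [rng.normal(size=(d, d)) for _ in range(4)]
A = disentangled_scores(H, P, *Ws, k)
attn = np.exp(A - A.max(axis=-1, keepdims=True))   # softmax over keys
attn /= attn.sum(axis=-1, keepdims=True)
```

The double loop keeps the three terms explicit for readability; a real implementation would batch these as matrix products and gather the relative-position rows by index.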

Syllabus

- Intro & Overview
- Position Encodings in Transformer's Attention Mechanism
- Disentangling Content & Position Information in Attention
- Disentangled Query & Key Construction in the Attention Formula
- Efficient Relative Position Encodings
- Enhanced Mask Decoder using Absolute Position Encodings
- My Criticism of EMD
- Experimental Results
- Scaling up to 1.5 Billion Parameters
- Conclusion & Comments


Taught by

Yannic Kilcher

Related Courses

Sequence Models
DeepLearning.AI via Coursera
Modern Natural Language Processing in Python
Udemy
Stanford Seminar - Transformers in Language: The Development of GPT Models Including GPT-3
Stanford University via YouTube
Long Form Question Answering in Haystack
James Briggs via YouTube
Spotify's Podcast Search Explained
James Briggs via YouTube