BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding

Offered By: Yannic Kilcher via YouTube

Tags

BERT Courses, Deep Learning Courses

Course Description

Overview

Explore a comprehensive video analysis of the groundbreaking BERT language representation model, which revolutionized natural language processing. Delve into Bidirectional Encoder Representations from Transformers and see how pre-training on both left and right context enables state-of-the-art performance across a wide range of language tasks. Examine the model's architecture and training procedure, including attention mechanisms, masked language modeling, and the use of pre-trained language models. Compare BERT to earlier models, discuss its limitations, and learn how it achieves remarkable improvements in question answering, language inference, and other NLP benchmarks. Gain insight into the work of Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova, and discover how BERT's conceptually simple yet empirically powerful approach pushed the boundaries of language understanding.
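To make the masked language modeling idea covered in the video concrete, here is a minimal sketch using the Hugging Face transformers library and the bert-base-uncased checkpoint (the library call and model name are assumptions for illustration, not part of this listing): the model fills in a masked token by attending to context on both its left and its right.

    # Minimal masked language modeling demo (assumes the transformers
    # library and the bert-base-uncased checkpoint are available).
    from transformers import pipeline

    unmasker = pipeline("fill-mask", model="bert-base-uncased")

    # BERT predicts the [MASK] token from both left and right context.
    for prediction in unmasker("Paris is the [MASK] of France."):
        print(prediction["token_str"], round(prediction["score"], 3))

Running this typically ranks "capital" as the top prediction, illustrating how bidirectional context constrains the masked word.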

Syllabus

Introduction
Paper Introduction
Model Comparison
Attention Based Model
Key and Value
Attention
BERT Limitations
Masked Language Modeling
Pretrained Language Modeling
Language Processing Tasks


Taught by

Yannic Kilcher

Related Courses

Neural Networks for Machine Learning
University of Toronto via Coursera
Machine Learning Techniques (機器學習技法)
National Taiwan University via Coursera
Machine Learning Capstone: An Intelligent Application with Deep Learning
University of Washington via Coursera
Applied Problems of Data Analysis (Прикладные задачи анализа данных)
Moscow Institute of Physics and Technology via Coursera
Leading Ambitious Teaching and Learning
Microsoft via edX