BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding

Offered By: Yannic Kilcher via YouTube

Tags

BERT Courses, Deep Learning Courses

Course Description

Overview

Explore a comprehensive video analysis of BERT, the language representation model that transformed natural language processing. Delve into bidirectional encoder representations from transformers, and understand how pre-training on both left and right context simultaneously enables state-of-the-art performance across a wide range of language tasks. Examine the model's architecture, including its attention mechanism, the masked language modeling objective, and how the pre-trained language model is adapted to downstream tasks. Compare BERT to other models, discuss its limitations, and learn how it achieves marked improvements in question answering, natural language inference, and other NLP benchmarks. Gain insight into the work of Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova, whose conceptually simple yet empirically powerful approach pushed the boundaries of language understanding.
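To make the masked language modeling objective concrete, here is a minimal sketch (not part of the course) using the Hugging Face transformers library and the bert-base-uncased checkpoint: BERT predicts a hidden token from the context on both its left and right.

    from transformers import pipeline

    # Load a pre-trained BERT checkpoint wrapped in a fill-mask pipeline.
    fill_mask = pipeline("fill-mask", model="bert-base-uncased")

    # BERT uses the bidirectional context around [MASK] to rank candidates.
    for prediction in fill_mask("The man went to the [MASK] to buy milk."):
        print(prediction["token_str"], round(prediction["score"], 3))

The pipeline returns the top candidate tokens with their probabilities; the example sentence is illustrative, not taken from the video.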

Syllabus

Introduction
Paper Introduction
Model Comparison
Attention Based Model
Key and Value
Attention
BERT Limitations
Masked Language Modeling
Pretrained Language Modeling
Language Processing Tasks
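The "Key and Value" and "Attention" chapters cover the scaled dot-product attention at the core of the transformer. A minimal NumPy sketch of that computation (illustrative only; the variable names Q, K, V for queries, keys, and values are the standard convention, not code from the video):

    import numpy as np

    def scaled_dot_product_attention(Q, K, V):
        # Similarity of each query to each key, scaled by sqrt(d_k).
        d_k = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d_k)
        # Softmax over the keys turns scores into attention weights.
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)
        # Each output is a weighted average of the value vectors.
        return weights @ V

    # Toy example: 3 tokens with 4-dimensional representations.
    rng = np.random.default_rng(0)
    Q, K, V = (rng.normal(size=(3, 4)) for _ in range(3))
    print(scaled_dot_product_attention(Q, K, V).shape)  # (3, 4)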


Taught by

Yannic Kilcher

Related Courses

Sentiment Analysis with Deep Learning using BERT
Coursera Project Network via Coursera
Natural Language Processing with Attention Models
DeepLearning.AI via Coursera
Fine Tune BERT for Text Classification with TensorFlow
Coursera Project Network via Coursera
Deploy a BERT question answering bot on Django
Coursera Project Network via Coursera
Generating discrete sequences: language and music
Ural Federal University via edX