Transfer Learning and Transformers - Full Stack Deep Learning - Spring 2021

Offered By: The Full Stack via YouTube

Tags

Deep Learning, Computer Vision, Transfer Learning, Transformers, Attention Mechanisms, Embeddings, Language Models, Positional Encoding

Course Description

Overview

Explore the evolution of transfer learning in this 49-minute lecture from the Full Stack Deep Learning Spring 2021 series. Dive into the origins of transfer learning in computer vision and its application to natural language processing through embeddings. Discover NLP's breakthrough moment with ELMo and ULMFiT, and their impact on benchmarks such as SQuAD, SNLI, and GLUE. Delve into the rise of Transformers, understanding key concepts such as masked self-attention, positional encoding, and layer normalization. Examine Transformer variants including BERT, the GPT series, DistilBERT, and T5. Watch GPT-3 demonstrations and gain insights into future directions in transfer learning and Transformers.
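
To make the attention-related concepts mentioned above concrete, here is a minimal NumPy sketch (not taken from the lecture) of single-head masked self-attention combined with sinusoidal positional encoding and layer normalization. The array shapes, weight matrices, and the tiny random example are illustrative assumptions.

```python
# Illustrative sketch: masked self-attention + sinusoidal positional encoding
# + layer normalization. Shapes and names are assumptions, not the lecture's code.
import numpy as np

def positional_encoding(seq_len, d_model):
    """Sinusoidal positional encoding as in 'Attention Is All You Need'."""
    positions = np.arange(seq_len)[:, None]            # (seq_len, 1)
    dims = np.arange(d_model)[None, :]                 # (1, d_model)
    angle_rates = 1.0 / np.power(10000, (2 * (dims // 2)) / d_model)
    angles = positions * angle_rates                   # (seq_len, d_model)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles[:, 0::2])              # even dims: sine
    pe[:, 1::2] = np.cos(angles[:, 1::2])              # odd dims: cosine
    return pe

def masked_self_attention(x, w_q, w_k, w_v):
    """Single-head causal (masked) self-attention over x of shape (seq_len, d_model)."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    d_k = q.shape[-1]
    scores = q @ k.T / np.sqrt(d_k)                    # (seq_len, seq_len)
    # Causal mask: position i may only attend to positions <= i.
    mask = np.triu(np.ones_like(scores, dtype=bool), k=1)
    scores = np.where(mask, -1e9, scores)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # row-wise softmax
    return weights @ v

def layer_norm(x, eps=1e-5):
    """Layer normalization over the feature dimension."""
    mean = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

# Tiny usage example with random weights (purely illustrative).
seq_len, d_model = 5, 8
rng = np.random.default_rng(0)
x = rng.normal(size=(seq_len, d_model)) + positional_encoding(seq_len, d_model)
w_q, w_k, w_v = (rng.normal(size=(d_model, d_model)) for _ in range(3))
out = layer_norm(x + masked_self_attention(x, w_q, w_k, w_v))  # residual + LayerNorm
print(out.shape)  # (5, 8)
```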

Syllabus

- Introduction
- Transfer Learning in Computer Vision
- Embeddings and Language Models
- NLP's ImageNet moment: ELMo and ULMFiT on datasets like SQuAD, SNLI, and GLUE
- Rise of Transformers
- Attention in Detail: Masked Self-Attention, Positional Encoding, and Layer Normalization
- Transformer Variants: BERT, GPT/GPT-2/GPT-3, DistilBERT, T5, etc.
- GPT-3 Demos
- Future Directions
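
The "Transfer Learning in Computer Vision" topic in the syllabus above typically amounts to fine-tuning a pretrained backbone on a new task. Below is a hedged PyTorch/torchvision sketch (not the lecture's code) that freezes an ImageNet-pretrained ResNet-18 and trains only a new classification head; the class count, batch, and learning rate are placeholder assumptions.

```python
# Illustrative transfer-learning sketch: fine-tune only a new head on a frozen,
# ImageNet-pretrained ResNet-18. Dataset, class count, and hyperparameters are placeholders.
import torch
import torch.nn as nn
from torchvision import models

num_classes = 10  # hypothetical downstream task

# Load a pretrained backbone and freeze its weights
# (torchvision >= 0.13 API; older versions use pretrained=True).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in backbone.parameters():
    param.requires_grad = False

# Replace the final classification layer with a fresh head for the new task;
# only this layer's parameters are updated during fine-tuning.
backbone.fc = nn.Linear(backbone.fc.in_features, num_classes)

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One dummy training step to show the shape of the fine-tuning loop.
images = torch.randn(4, 3, 224, 224)               # fake image batch
labels = torch.randint(0, num_classes, (4,))       # fake labels
optimizer.zero_grad()
loss = criterion(backbone(images), labels)
loss.backward()
optimizer.step()
```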


Taught by

The Full Stack

Related Courses

Deep Learning for Natural Language Processing
University of Oxford via Independent
Sequence Models
DeepLearning.AI via Coursera
Deep Learning Part 1 (IITM)
Indian Institute of Technology Madras via Swayam
Deep Learning - Part 1
Indian Institute of Technology, Ropar via Swayam
Deep Learning - IIT Ropar
Indian Institute of Technology, Ropar via Swayam