Transfer Learning and Transformers - Full Stack Deep Learning - Spring 2021
Offered By: The Full Stack via YouTube
Course Description
Overview
Explore the evolution of transfer learning in this 49-minute lecture from the Full Stack Deep Learning Spring 2021 series. Dive into the origins of transfer learning in computer vision and its application to Natural Language Processing through embeddings. Discover NLP's breakthrough moment with ELMo and ULMFiT, and their impact on benchmarks such as SQuAD, SNLI, and GLUE. Delve into the rise of Transformers, understanding key concepts such as masked self-attention, positional encoding, and layer normalization. Examine various Transformer variants including BERT, the GPT series, DistilBERT, and T5. Watch impressive GPT-3 demonstrations and gain insights into future directions in the field of transfer learning and transformers.
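The attention concept named above can be made concrete in a few lines of code. Below is a minimal NumPy sketch of single-head masked (causal) self-attention as it appears in the standard Transformer formulation; the function name, weight matrices, and toy dimensions are illustrative assumptions, not code from the lecture.

```python
import numpy as np

def masked_self_attention(x, W_q, W_k, W_v):
    """Single-head causal self-attention: each position attends
    only to itself and earlier positions (illustrative sketch)."""
    q, k, v = x @ W_q, x @ W_k, x @ W_v         # (seq, d_k) projections
    d_k = q.shape[-1]
    scores = q @ k.T / np.sqrt(d_k)             # scaled dot-product scores
    mask = np.triu(np.ones_like(scores), k=1)   # 1s strictly above the diagonal
    scores = np.where(mask == 1, -1e9, scores)  # block attention to future tokens
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ v                          # weighted sum of value vectors

# Toy usage: 4 tokens, model width 8 (hypothetical sizes)
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
W_q, W_k, W_v = (rng.normal(size=(8, 8)) for _ in range(3))
out = masked_self_attention(x, W_q, W_k, W_v)   # shape (4, 8)
```

The upper-triangular mask is what makes the attention "masked": position i only ever mixes information from positions 0..i, which is what lets GPT-style models be trained as left-to-right language models.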
Syllabus
- Introduction
- Transfer Learning in Computer Vision
- Embeddings and Language Models
- NLP's ImageNet moment: ELMo and ULMFiT on datasets like SQuAD, SNLI, and GLUE
- Rise of Transformers
- Attention in Detail: Masked Self-Attention, Positional Encoding, and Layer Normalization (see the sketches after this syllabus)
- Transformer Variants: BERT, GPT/GPT-2/GPT-3, DistilBERT, T5, etc.
- GPT-3 Demos
- Future Directions
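To round out the attention syllabus item above, here is a small NumPy sketch of the other two building blocks it names: the fixed sinusoidal positional encoding from the original Transformer paper, and layer normalization over each token's feature vector. Function names and the epsilon default are illustrative assumptions, not lecture code.

```python
import numpy as np

def sinusoidal_positional_encoding(seq_len, d_model):
    """Fixed sin/cos encodings from "Attention Is All You Need":
    even dimensions use sine, odd use cosine (assumes even d_model)."""
    pos = np.arange(seq_len)[:, None]                  # (seq, 1)
    i = np.arange(d_model // 2)[None, :]               # (1, d/2)
    angles = pos / np.power(10000.0, 2 * i / d_model)  # (seq, d/2)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)
    pe[:, 1::2] = np.cos(angles)
    return pe

def layer_norm(x, eps=1e-5):
    """Normalize each token's features to zero mean / unit variance."""
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)
```

Because the positional encodings are deterministic functions of position, they add order information to token embeddings without any learned parameters, while layer normalization keeps activations well-scaled through the deep stack of Transformer blocks.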
Taught by
The Full Stack
Related Courses
- Structuring Machine Learning Projects (DeepLearning.AI via Coursera)
- Natural Language Processing on Google Cloud (Google Cloud via Coursera)
- Introduction to Learning Transfer and Life Long Learning (3L) (University of California, Irvine via Coursera)
- Advanced Deployment Scenarios with TensorFlow (DeepLearning.AI via Coursera)
- Neural Style Transfer with TensorFlow (Coursera Project Network via Coursera)