Transformers: Text Classification for NLP Using BERT
Offered By: LinkedIn Learning
Course Description
Overview
Learn about transformers, the go-to architecture for NLP and computer vision tasks.
Syllabus
Introduction
- Natural language processing with transformers
- How to use the exercise files
- How transformers are used in NLP
- Transformers in production
- Transformers history
- Challenge: BERT model sizes
- Solution: BERT model sizes
- Bias in BERT
- How was BERT trained?
- Transfer learning
- Transformer: Architecture overview
- BERT model and tokenization
- Positional encodings and segment embeddings
- Tokenizers
- Self-attention
- Multi-head attention and feedforward network
- BERT and text classification
- The Datasets library
- Overview of IMDb dataset
- Using a tokenizer
- Tiny IMDb
- A training run
- Additional training runs
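The syllabus lessons on self-attention and multi-head attention describe the core computation inside each transformer layer. As a rough illustration of what those lessons cover, here is a minimal NumPy sketch of scaled dot-product self-attention; the matrix names and dimensions are illustrative assumptions, not code from the course.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention for a single sequence.

    X:          (seq_len, d_model) token embeddings
    Wq, Wk, Wv: (d_model, d_k) projection matrices (illustrative)
    Returns (output, attention_weights).
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv        # project tokens to queries/keys/values
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)         # pairwise token similarity, scaled
    weights = softmax(scores, axis=-1)      # each row sums to 1
    return weights @ V, weights             # each output is a weighted sum of values

# Toy example: 4 tokens, model dim 8, head dim 4 (sizes chosen for illustration)
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 4)) for _ in range(3))
out, w = self_attention(X, Wq, Wk, Wv)
print(out.shape, w.shape)
```

In multi-head attention, several such projections run in parallel and their outputs are concatenated before the feedforward network, which is the arrangement the "Multi-head attention and feedforward network" lesson refers to.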
Taught by
Jonathan Fernandes
Related Courses
- AWS Flash - Navigating the Large Language Models Landscape (Simplified Chinese) (Amazon Web Services via AWS Skill Builder)
- Generative AI Language Modeling with Transformers (IBM via Coursera)
- The Rise of Generative AI (Board Infinity via Coursera)
- Introduction to LLMs in Python (DataCamp)