Generative AI: Working with Large Language Models
Offered By: LinkedIn Learning
Course Description
Overview
Explore a user-friendly approach to working with transformers and large language models for natural language processing.
Syllabus
Introduction
- Learning about Large Language Models
- What are large language models?
- Transformers in production
- Transformers: History
- Transfer learning
- Transformer: Architecture overview
- Self-attention (see the sketch after this syllabus)
- Multi-head attention and Feed Forward Network
- GPT-3
- GPT-3 use cases
- Challenges and shortcomings of GPT-3
- GLaM
- Megatron-Turing NLG Model
- Gopher
- Scaling laws
- Chinchilla (see the worked example after this syllabus)
- BIG-bench
- PaLM
- OPT and BLOOM
- Going further with Transformers
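For reference, the "Self-attention" and "Multi-head attention" lessons build on the scaled dot-product attention of the original Transformer paper: Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V. Below is a minimal NumPy sketch of a single attention head; the shapes and variable names are illustrative and not taken from the course materials.

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the row max for numerical stability before exponentiating.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, W_q, W_k, W_v):
    """Scaled dot-product self-attention for one head.

    X: (seq_len, d_model) token embeddings.
    W_q, W_k, W_v: (d_model, d_k) projection matrices.
    """
    Q, K, V = X @ W_q, X @ W_k, X @ W_v
    d_k = Q.shape[-1]
    # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
    scores = Q @ K.T / np.sqrt(d_k)
    weights = softmax(scores, axis=-1)  # each row sums to 1
    return weights @ V

# Toy usage: 4 tokens, model width 8, head width 4 (all hypothetical sizes).
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
W_q, W_k, W_v = (rng.normal(size=(8, 4)) for _ in range(3))
print(self_attention(X, W_q, W_k, W_v).shape)  # (4, 4)
```

Multi-head attention, covered in the following lesson, simply runs several such heads in parallel and concatenates their outputs before a final linear projection.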
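The "Scaling laws" and "Chinchilla" lessons cover the Hoffmann et al. (2022) result that, for a fixed compute budget, parameter count and training tokens should grow roughly in proportion, at about 20 tokens per parameter. A back-of-the-envelope sketch follows, assuming the common C ≈ 6ND FLOPs approximation; the constants are published rules of thumb, not figures from this course.

```python
def chinchilla_estimate(compute_flops, tokens_per_param=20.0):
    """Rough compute-optimal model size under C ~= 6 * N * D.

    With D = tokens_per_param * N, solving 6 * N * D = C gives
    N = sqrt(C / (6 * tokens_per_param)).
    """
    n_params = (compute_flops / (6.0 * tokens_per_param)) ** 0.5
    return n_params, tokens_per_param * n_params

# Chinchilla itself used ~5.9e23 training FLOPs; this recovers roughly
# its reported 70B parameters and 1.4T training tokens.
n, d = chinchilla_estimate(5.9e23)
print(f"params ~ {n:.2e}, tokens ~ {d:.2e}")
```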
Taught by
Jonathan Fernandes
Related Courses
- Transformers: Text Classification for NLP Using BERT (LinkedIn Learning)
- TensorFlow: Working with NLP (LinkedIn Learning)
- TransGAN - Two Transformers Can Make One Strong GAN - Machine Learning Research Paper Explained (Yannic Kilcher via YouTube)
- Nyströmformer: A Nyström-Based Algorithm for Approximating Self-Attention (Yannic Kilcher via YouTube)
- Recreate Google Translate - Model Training (Edan Meyer via YouTube)