YoVDO

LLM Foundations - LLM Bootcamp

Offered By: The Full Stack via YouTube

Tags

LLM (Large Language Model) Courses, Machine Learning Courses, BERT Courses, T5 Courses, Transformer Architecture Courses, Positional Encoding Courses

Course Description

Overview

Dive into a comprehensive 48-minute video lecture on the foundational concepts of large language models. Explore core machine learning principles, the Transformer architecture, and notable LLMs. Gain insights into pretraining dataset composition, including the importance of code in training data. Learn about key components such as input embedding, masked multi-head attention, positional encoding, and feed-forward layers. Discover why Transformers work so well and examine notable models like BERT, T5, GPT, Chinchilla, LLaMA, and RETRO. Understand the significance of scaling laws and instruction tuning in LLM development. Access accompanying slides and additional resources for a deeper understanding of this rapidly evolving field.
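As one concrete taste of the material covered, the positional encoding discussed in the lecture is typically the sinusoidal scheme from the original Transformer paper ("Attention Is All You Need"). A minimal sketch in plain Python (the function name and pure-list output are illustrative choices, not from the course):

```python
import math

def positional_encoding(seq_len, d_model):
    """Sinusoidal positional encoding: even dimensions use sine,
    odd dimensions use cosine, with geometrically spaced wavelengths.
    Returns a seq_len x d_model list of lists."""
    pe = [[0.0] * d_model for _ in range(seq_len)]
    for pos in range(seq_len):
        for i in range(0, d_model, 2):
            # Wavelength grows from 2*pi up to 10000*2*pi across dimensions.
            angle = pos / (10000 ** (i / d_model))
            pe[pos][i] = math.sin(angle)
            if i + 1 < d_model:
                pe[pos][i + 1] = math.cos(angle)
    return pe
```

Because the encoding depends only on position and dimension, it can be precomputed once and added to the input embeddings, letting the otherwise order-agnostic attention layers distinguish token positions.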

Syllabus

Intro
Foundations of Machine Learning
The Transformer Architecture
Transformer Decoder Overview
Inputs
Input Embedding
Masked Multi-Head Attention
Positional Encoding
Skip Connections and Layer Norm
Feed-forward Layer
Transformer Hyperparameters and Why They Work So Well
Notable LLM: BERT
Notable LLM: T5
Notable LLM: GPT
Notable LLM: Chinchilla and Scaling Laws
Notable LLM: LLaMA
Why include code in LLM training data?
Instruction Tuning
Notable LLM: RETRO


Taught by

The Full Stack

Related Courses

Artificial Intelligence Foundations: Neural Networks
LinkedIn Learning
Transformers: Text Classification for NLP Using BERT
LinkedIn Learning
TensorFlow: Working with NLP
LinkedIn Learning
Learn Natural Language Processing with BERT! - NLP Techniques Leading from Attention and Transformer to BERT -
Udemy
Complete Natural Language Processing Tutorial in Python
Keith Galli via YouTube