Large Language Models - Will They Keep Getting Bigger?
Offered By: Massachusetts Institute of Technology via YouTube
Course Description
Overview
Syllabus
Introduction
What are language models
Modern NLP
Scaling
Sparse models
GShard
BASE Layers
Formal Optimization
Algorithmic Optimization
Experiments
Comparison
Benefits
DEMix Layers
Representations
Simple routing
Training time
Parallel training
Data curation
Unrealistic setting
Domain structure
Inference procedure
Perplexity numbers
Modularity
Remove experts
Summary
Generic language models
Hot dog example
Hot pan example
Common sense example
Large language models
The fundamental challenge
Surface form competition
Flip the reasoning
Key intuition
Noisy channel models
Finetuning
Scoring Strings
Web Crawls
Example Output
Structured Data
Efficiency
Questions
Density estimation
Better training objectives
Optimization
Probability
Induction
Multimodality
Outliers
Compute vs. data
Taught by
MIT Embodied Intelligence
Related Courses
TensorFlow: Working with NLP (LinkedIn Learning)
Introduction to Video Editing - Video Editing Tutorials (Great Learning via YouTube)
HuggingFace Crash Course - Sentiment Analysis, Model Hub, Fine Tuning (Python Engineer via YouTube)
GPT3 and Finetuning the Core Objective Functions - A Deep Dive (David Shapiro ~ AI via YouTube)
How to Build a Q&A AI in Python - Open-Domain Question-Answering (James Briggs via YouTube)