Understanding ChatGPT: From Machine Learning to Language Models
Offered By: John Savill's Technical Training via YouTube
Course Description
Overview
Dive into a comprehensive 1 hour 25 minute video lecture exploring the foundations of ChatGPT, machine learning, and neural networks. Begin with an introduction to the history and training of AI models, then progress through key concepts like prediction, combining curves to approximate complex shapes, weights and biases, activation functions, and backpropagation. Examine the evolution of GPT models, including GPT-2 and GPT-3, and understand crucial elements such as attention mechanisms and tokenization. Conclude with an in-depth look at ChatGPT itself and final thoughts on its implications. Access additional resources, including a whiteboard visualization, research papers, and links to further learning materials on Azure, DevOps, and PowerShell.
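The building blocks previewed above (weights, biases, activation functions, and gradient descent) can be sketched in a few lines of code. The snippet below is not taken from the lecture; it is a minimal, illustrative Python sketch assuming a one-input network with two sigmoid hidden neurons and a single hand-rolled gradient descent step, with all values chosen purely for illustration.

```python
import math

def sigmoid(x):
    # Activation function: squashes any input into the range (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

# A tiny network: one input, two hidden neurons, one output.
# Each hidden neuron computes (weight * input + bias) and applies the
# activation, producing a shifted/scaled curve; the output layer then
# combines those curves into the final prediction.
w_hidden = [1.5, -2.0]   # hidden-layer weights (illustrative values)
b_hidden = [0.0, 1.0]    # hidden-layer biases
w_out    = [2.0, 1.0]    # output-layer weights
b_out    = -0.5          # output-layer bias

def predict(x):
    hidden = [sigmoid(w * x + b) for w, b in zip(w_hidden, b_hidden)]
    return sum(w * h for w, h in zip(w_out, hidden)) + b_out

# One gradient descent step on a single (input, target) pair, adjusting
# only the output bias for brevity; backpropagation extends the same
# chain-rule idea to every weight and bias in the network.
x, target = 0.5, 1.0
learning_rate = 0.1
error = predict(x) - target      # gradient of 0.5 * error**2 w.r.t. the prediction
b_out -= learning_rate * error   # nudge the bias to reduce the squared error

print(predict(x))
```

Repeating that update across many examples and all parameters is, in essence, the training loop the lecture builds up to before moving on to GPT-specific ideas such as attention and tokenization.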
Syllabus
- Introduction
- History and training
- Machine learning
- Prediction
- More complex predictions
- Making any shape with enough curves combined
- How to get different curves from one curve
- Weights and bias
- Activation functions
- Backpropagation and gradient descent
- Visualization of layers working together
- What is GPT
- Attention
- Tokens
- GPT-2
- GPT-3
- Zero-, one- and few-shot
- What was it trained on
- ChatGPT
- Closing thoughts
Taught by
John Savill's Technical Training
Related Courses
- Introduction to Artificial Intelligence (Stanford University via Udacity)
- Probabilistic Graphical Models 1: Representation (Stanford University via Coursera)
- Artificial Intelligence for Robotics (Stanford University via Udacity)
- Computer Vision: The Fundamentals (University of California, Berkeley via Coursera)
- Learning from Data: Introductory Machine Learning Course (California Institute of Technology via Independent)