What Is ChatGPT Doing? Training Neural Networks - Episode 4
Offered By: Wolfram via YouTube
Course Description
Overview
Explore the inner workings of large language models, particularly ChatGPT, in this 15-minute video from Wolfram. Delve into the training process of neural networks, examining topics such as layer modification during training, fine-tuning techniques, and reinforcement learning. Learn about training examples, output analysis, and parameter selection. Investigate how adjustments are made over time and what happens when improvements stagnate. Gain valuable insights into the mechanics behind ChatGPT's functionality and effectiveness through this informative conversation.
Syllabus
Intro
What Happens to a Neural Net While Training?
Can We Change the Layers While Training?
What about Fine-Tuning?
Reinforcement Learning
Training Examples
What Does the Output Look Like?
Further Investigation
How Do We Decide on Parameters? And How Do We Adjust Them over Time?
What Happens if It Doesn't Improve over Time?
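The syllabus topics on parameter adjustment and stalled improvement can be illustrated with a minimal sketch: a single-parameter model trained by gradient descent that stops once the loss no longer improves. This is a generic illustration of the ideas discussed, not code from the video; the toy task, names, and thresholds are all assumptions.

```python
# Hedged sketch: one trainable parameter fit by gradient descent,
# with early stopping when improvement stagnates. Illustrative only.

def train(steps=200, lr=0.1, patience=10, tol=1e-6):
    # Toy task: learn w so that w * x approximates 3 * x.
    data = [(x, 3.0 * x) for x in [0.5, 1.0, 1.5, 2.0]]
    w = 0.0                      # the single trainable parameter
    best_loss = float("inf")
    stale = 0                    # steps without meaningful improvement
    for _ in range(steps):
        # Mean squared error and its gradient with respect to w.
        loss = sum((w * x - y) ** 2 for x, y in data) / len(data)
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad           # gradient-descent parameter adjustment
        if best_loss - loss > tol:
            best_loss = loss     # still improving: reset the counter
            stale = 0
        else:
            stale += 1
        if stale >= patience:    # training has stopped improving
            break
    return w, best_loss

w, loss = train()
```

Real networks adjust millions of parameters the same way, layer by layer; the stagnation check here mirrors the common practice of stopping (or changing the learning rate) when the loss plateaus.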
Taught by
Wolfram
Related Courses
Neural Networks for Machine Learning - University of Toronto via Coursera
Good Brain, Bad Brain: Basics - University of Birmingham via FutureLearn
Statistical Learning with R - Stanford University via edX
Machine Learning 1—Supervised Learning - Brown University via Udacity
Fundamentals of Neuroscience, Part 2: Neurons and Networks - Harvard University via edX