Inside ChatGPT: Unveiling the Training Process of OpenAI's Language Model
Offered By: Krish Naik via YouTube
Course Description
Overview
Delve into the training process of OpenAI's language model in this 21-minute video. Explore the three stages of ChatGPT's training: Generative Pretraining, Supervised Fine-Tuning, and Reinforcement Learning from Human Feedback (RLHF). Gain insights into how this large language model learns to generate text, translate languages, create diverse content, and provide informative answers. Learn about the massive dataset of text and code used for pretraining and how ChatGPT is adapted with new human-provided data in the later stages. Follow along with timestamps for each key section, including an introduction and detailed explanations of each training stage.
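To make the three stages more concrete, here is a minimal, hypothetical PyTorch sketch (not from the video, and not OpenAI's actual architecture or data): the TinyCausalLM class, model sizes, and random token batches are toy placeholders. Stages 1 and 2 share the same next-token cross-entropy objective, differing only in data (raw web/code text vs. human-written demonstrations), while stage 3 begins by fitting a reward model on human preference pairs; the subsequent policy-optimization (PPO) step is omitted here.

```python
# Hedged, illustrative sketch of ChatGPT-style training stages; everything here
# is a toy stand-in, not OpenAI's real models, data, or hyperparameters.
import torch
import torch.nn as nn
import torch.nn.functional as F

vocab_size, d_model, seq_len, batch = 100, 32, 16, 4

class TinyCausalLM(nn.Module):
    """Toy causal language model standing in for the GPT backbone."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.lm_head = nn.Linear(d_model, vocab_size)
        self.reward_head = nn.Linear(d_model, 1)  # used only for stage 3

    def hidden(self, tokens):
        # Causal mask so each position only attends to earlier tokens.
        mask = nn.Transformer.generate_square_subsequent_mask(tokens.size(1))
        return self.layer(self.embed(tokens), src_mask=mask)

    def logits(self, tokens):
        return self.lm_head(self.hidden(tokens))

    def reward(self, tokens):
        # Scalar score for a whole response, read off the final position.
        return self.reward_head(self.hidden(tokens)[:, -1]).squeeze(-1)

def next_token_loss(model, tokens):
    """Stages 1 and 2: predict token t+1 from tokens up to t (cross-entropy).
    Stage 1 uses raw text/code; stage 2 uses human-written demonstrations."""
    logits = model.logits(tokens[:, :-1])
    return F.cross_entropy(logits.reshape(-1, vocab_size), tokens[:, 1:].reshape(-1))

def reward_model_loss(model, chosen, rejected):
    """Stage 3 (reward-modelling part of RLHF): score the response human
    labellers preferred above the rejected one, via a pairwise logistic loss."""
    return -F.logsigmoid(model.reward(chosen) - model.reward(rejected)).mean()

model = TinyCausalLM()
text = torch.randint(0, vocab_size, (batch, seq_len))
chosen = torch.randint(0, vocab_size, (batch, seq_len))
rejected = torch.randint(0, vocab_size, (batch, seq_len))

print("pretraining / SFT loss:", next_token_loss(model, text).item())
print("reward-model loss:", reward_model_loss(model, chosen, rejected).item())
```

In practice each loss would drive an optimizer step over its own dataset, and the fitted reward model would then guide reinforcement-learning updates of the fine-tuned policy, which is the part of RLHF this sketch leaves out.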
Syllabus
Introduction
3 stages of training
Generative Pretraining
Supervised Fine Tuning
Reinforcement Learning from Human Feedback
Taught by
Krish Naik
Related Courses
Big Self-Supervised Models Are Strong Semi-Supervised Learners - Yannic Kilcher via YouTube
A Transformer-Based Framework for Multivariate Time Series Representation Learning - Launchpad via YouTube
Fine Tune GPT-3.5 Turbo - Data Science Dojo via YouTube
Yi 34B: The Rise of Powerful Mid-Sized Models - Base, 200k, and Chat - Sam Witteveen via YouTube
LLaMA 2 and Meta AI Projects - Interview with Thomas Scialom - Aleksa Gordić - The AI Epiphany via YouTube