GPT-2: Language Models are Unsupervised Multitask Learners
Offered By: Yannic Kilcher via YouTube
Course Description
Overview
Explore OpenAI's groundbreaking GPT-2 language model and the controversy surrounding its release in this 28-minute video analysis. Delve into the model's ability to perform various natural language processing tasks without explicit supervision, including question answering, machine translation, reading comprehension, and summarization. Examine how GPT-2, trained on the massive WebText dataset, achieves state-of-the-art results on multiple language modeling benchmarks in a zero-shot setting. Discover the potential implications of this technology for building more advanced language processing systems that learn from naturally occurring demonstrations, while considering the ethical concerns and debates sparked by its development.
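As a rough illustration of the zero-shot idea covered in the video, the sketch below prompts a pretrained GPT-2 model with a task framed as plain text, so the task is specified by the prompt rather than by fine-tuning. It assumes the Hugging Face transformers library and the public "gpt2" checkpoint, neither of which is part of the original video; the "TL;DR:" prompt follows the summarization trick described in the paper, while the decoding settings are only one plausible choice.

from transformers import GPT2LMHeadModel, GPT2Tokenizer

# Load the smallest public checkpoint; larger ones ("gpt2-xl")
# tend to follow zero-shot prompts more reliably.
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# The task is specified purely by the prompt text, not by fine-tuning:
# the model just continues the sequence. Appending "TL;DR:" is the
# paper's cue for zero-shot summarization.
article = "A long news article goes here..."  # placeholder input
prompt = article + "\nTL;DR:"
inputs = tokenizer(prompt, return_tensors="pt")

outputs = model.generate(
    **inputs,
    max_new_tokens=40,
    do_sample=False,                      # greedy decoding, deterministic
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 defines no pad token
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

Greedy decoding keeps the demo deterministic; the paper itself reports better summaries with sampling, so treat this as a sketch of the prompting mechanism rather than a reproduction of its results.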
Syllabus
GPT-2: Language Models are Unsupervised Multitask Learners
Taught by
Yannic Kilcher
Related Courses
Introduction to Artificial Intelligence - Stanford University via Udacity
Probabilistic Graphical Models 1: Representation - Stanford University via Coursera
Artificial Intelligence for Robotics - Stanford University via Udacity
Computer Vision: The Fundamentals - University of California, Berkeley via Coursera
Learning from Data (Introductory Machine Learning course) - California Institute of Technology via Independent