Analyzing GPT-2's Brain Development
Offered By: Wolfram via YouTube
Course Description
Overview
Explore the intricacies of GPT-2's "brain development" in this 27-minute Wolfram Student Podcast episode featuring Shriya Ramanan's project. Delve into the effects of zeroing out specific tokens, manipulating nodes and their weights, and adjusting temperature parameters to gain a deeper understanding of the GPT-2 model's structure. Learn about generating tokens, examining individual nodes, and drawing parallels with the human brain. The discussion covers topics in AI and machine learning, providing insight into the inner workings of language models through the lens of computational analysis.
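As a rough illustration of the kinds of experiments described above, the sketch below uses the Hugging Face transformers GPT-2 implementation in Python. Note that the podcast's project is built in the Wolfram Language, and the prompt, layer index, and temperature values here are illustrative assumptions rather than the project's actual settings. It samples text at different temperatures, then zeroes out the weights of one attention projection to show how "lesioning" part of the network changes its output.

```python
# Illustrative sketch only: the podcast's project uses Wolfram Language tools;
# this uses Hugging Face transformers, and the layer/temperature choices are
# arbitrary assumptions made for demonstration.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def generate(prompt, temperature=1.0, max_new_tokens=20):
    """Sample a continuation; temperature rescales the logits before sampling."""
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        output = model.generate(
            **inputs,
            do_sample=True,
            temperature=temperature,
            max_new_tokens=max_new_tokens,
            pad_token_id=tokenizer.eos_token_id,
        )
    return tokenizer.decode(output[0], skip_special_tokens=True)

# Baseline generations: low temperature makes the sampling distribution
# peakier (more repetitive/deterministic), high temperature flattens it.
print(generate("The brain is", temperature=0.2))
print(generate("The brain is", temperature=1.5))

# "Zeroing out" weights: zero the attention output projection of one
# transformer block (block 5, an arbitrary choice) and observe how the
# generations degrade -- loosely analogous to lesioning a brain region.
with torch.no_grad():
    model.transformer.h[5].attn.c_proj.weight.zero_()

print(generate("The brain is", temperature=0.7))
```

Temperature works by dividing the logits by the temperature value before the softmax, so lower values concentrate probability on the top tokens while higher values spread it out and make samples more varied.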
Syllabus
Intro
Project Summary
Generating Tokens
Nodes
Zeroing out weights
The human brain
Temperature parameters
Conclusion
Taught by
Wolfram
Related Courses
Doing Clinical Research: Biostatistics with the Wolfram Language - University of Cape Town via Coursera
Mathematica 11 Essential Training - LinkedIn Learning
Computation and the Fundamental Theory of Physics - The Royal Institution via YouTube
New in Wolfram Language and Mathematica 14: Wolfram U Webinar Series - Wolfram U
System Modeling Study Group Sessions - Wolfram U