ROME - Locating and Editing Factual Associations in GPT - Paper Explained & Author Interview
Offered By: Yannic Kilcher via YouTube
Course Description
Overview
Explore how large language models store and recall factual associations in this in-depth video lecture and author interview. Delve into the mechanisms behind GPT models' ability to store vast amounts of world knowledge, and learn about ROME (Rank-One Model Editing), a method for making targeted edits to individual facts. Discover how causal tracing reveals where information is stored within the model, why the MLP layers turn out to be central to this process, and how language model knowledge can be edited with precision. Examine the experimental evaluation, including the CounterFact benchmark, and consider the implications for understanding a model's inner workings and gaining greater control over AI systems. Gain insights into cutting-edge research on model editing and the nature of knowledge representation in artificial intelligence.
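The editing idea discussed in the video treats an MLP projection matrix as a key-value store and applies a rank-one update so that a chosen subject key maps to a new value while other keys are disturbed as little as possible. The snippet below is a minimal NumPy sketch of that idea using toy dimensions and random placeholder vectors; the names (`W`, `k_star`, `v_star`, the covariance `C`) and the ridge term are illustrative assumptions, not code from the ROME paper or its repository.

```python
import numpy as np

# Toy sketch of a rank-one "fact edit" on an MLP projection matrix.
# Shapes and vectors are random placeholders, purely for illustration.

rng = np.random.default_rng(0)

d_mlp, d_model = 64, 32                   # hidden and output widths of a toy MLP layer
W = rng.normal(size=(d_model, d_mlp))     # projection weight acting as a key -> value map

# Covariance of the "key" activations the layer normally sees, estimated from samples.
K = rng.normal(size=(d_mlp, 1000))
C = K @ K.T / K.shape[1]

k_star = rng.normal(size=d_mlp)           # key vector standing in for the edited subject
v_star = rng.normal(size=d_model)         # value vector encoding the new association

# Closed-form rank-one update: force W_new @ k_star == v_star while keeping the
# change small for typical keys under covariance C (constrained least squares).
C_inv_k = np.linalg.solve(C + 1e-3 * np.eye(d_mlp), k_star)   # small ridge for stability
W_new = W + np.outer(v_star - W @ k_star, C_inv_k) / (k_star @ C_inv_k)

print("edited key maps to target:", np.allclose(W_new @ k_star, v_star, atol=1e-6))
print("mean change on other keys:", np.linalg.norm((W_new - W) @ K, axis=0).mean())
```

In the setting the paper describes, the key and value are not random: the key is derived from the model's hidden activations at the subject token, and the value is chosen so that the edited layer makes the model emit the desired new fact.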
Syllabus
- Introduction
- What are the main questions in this subfield?
- How causal tracing reveals where facts are stored
- Clever experiments show the importance of MLPs
- How do MLPs store information?
- How to edit language model knowledge with precision?
- What does it mean to know something?
- Experimental Evaluation & the CounterFact benchmark
- How to obtain the required latent representations?
- Where is the best location in the model to perform edits?
- What do these models understand about language?
- Questions for the community
Taught by
Yannic Kilcher
Related Courses
- Microsoft Bot Framework and Conversation as a Platform (Microsoft via edX)
- Unlocking the Power of OpenAI for Startups - Microsoft for Startups (Microsoft via YouTube)
- Improving Customer Experiences with Speech to Text and Text to Speech (Microsoft via YouTube)
- Stanford Seminar - Deep Learning in Speech Recognition (Stanford University via YouTube)
- Select Topics in Python: Natural Language Processing (Codio via Coursera)