ROME - Locating and Editing Factual Associations in GPT - Paper Explained & Author Interview

Offered By: Yannic Kilcher via YouTube

Tags

ChatGPT Courses, Artificial Intelligence Courses, Language Models Courses

Course Description

Overview

Explore an in-depth analysis of how large language models store and recall factual associations in this comprehensive video lecture. Delve into the mechanisms behind GPT models' ability to store vast amounts of world knowledge, and learn about ROME (Rank-One Model Editing), a proposed method for targeted editing of individual facts. Discover how causal tracing reveals where information is stored within the model, why the MLP layers are central to this process, and how to edit a language model's knowledge with precision. Examine the experimental evaluation, including the CounterFact benchmark, and consider the implications for understanding models' inner workings and gaining greater control over AI systems. Gain insight into current research on model editing and the nature of knowledge representation in artificial intelligence.
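The editing idea discussed in the video can be illustrated with a toy sketch. Assume, as a simplification, that an MLP down-projection matrix W acts as a linear key-to-value store; a rank-one update then forces a chosen key to map to a new value while perturbing W only in the direction of that key. (The actual ROME method additionally whitens the update by an estimated key covariance; the dimensions and vectors below are made up for illustration.)

```python
import numpy as np

# Toy rank-one "fact edit" in the spirit of ROME.
# Assumption: W is an MLP weight acting as a linear key -> value store.
rng = np.random.default_rng(0)
d_k, d_v = 8, 4
W = rng.normal(size=(d_v, d_k))   # existing weight matrix

k_star = rng.normal(size=d_k)     # key for the subject (e.g. "Eiffel Tower")
v_star = rng.normal(size=d_v)     # desired value (e.g. "... is in Rome")

# Rank-one update so that W_edited @ k_star == v_star exactly,
# leaving directions orthogonal to k_star untouched.
delta = np.outer(v_star - W @ k_star, k_star) / (k_star @ k_star)
W_edited = W + delta

assert np.allclose(W_edited @ k_star, v_star)
```

Because the update is rank one, vectors orthogonal to the edited key pass through unchanged, which is the intuition behind editing a single fact without retraining the whole model.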

Syllabus

- Introduction
- What are the main questions in this subfield?
- How causal tracing reveals where facts are stored
- Clever experiments show the importance of MLPs
- How do MLPs store information?
- How to edit language model knowledge with precision?
- What does it mean to know something?
- Experimental Evaluation & the CounterFact benchmark
- How to obtain the required latent representations?
- Where is the best location in the model to perform edits?
- What do these models understand about language?
- Questions for the community
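The causal-tracing step in the syllabus can be sketched in miniature. In the paper's setup, the subject's token embeddings are corrupted with noise and individual clean hidden states are restored one at a time to see where the correct output recovers. The stand-in "model" below is just a stack of residual layers with made-up weights, not a transformer, so it only illustrates the patch-and-sweep mechanic.

```python
import numpy as np

# Toy causal-tracing sketch. Assumption: a stack of residual layers stands in
# for a transformer; real causal tracing noises subject-token embeddings and
# restores individual hidden states to locate where a fact is computed.
rng = np.random.default_rng(1)
layers = [rng.normal(scale=0.5, size=(6, 6)) for _ in range(4)]

def run(x, patch=None):
    """Forward pass; patch=(layer, state) overwrites that layer's output."""
    h, states = x, []
    for i, W in enumerate(layers):
        h = np.tanh(W @ h) + x          # residual stream keeps reading input
        if patch is not None and patch[0] == i:
            h = patch[1]                # restore the clean hidden state here
        states.append(h)
    return h, states

x_clean = rng.normal(size=6)
out_clean, clean_states = run(x_clean)
x_noised = x_clean + rng.normal(scale=3.0, size=6)   # corrupt the input

# Sweep the restoration site; a smaller distance means more of the clean
# output is recovered by restoring that layer's state.
for i in range(len(layers)):
    out, _ = run(x_noised, patch=(i, clean_states[i]))
    print(f"restore layer {i}: distance to clean output = "
          f"{np.linalg.norm(out - out_clean):.3f}")
```

Layers whose restoration recovers most of the clean output are the candidate storage sites; in the paper this sweep points at mid-layer MLPs at the subject's last token.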


Taught by

Yannic Kilcher

Related Courses

ChatGPT et IA : mode d'emploi pour managers et RH
CNAM via France Université Numérique
Generating New Recipes using GPT-2
Coursera Project Network via Coursera
Deep Learning NLP: Training GPT-2 from scratch
Coursera Project Network via Coursera
Data Science A-Z: Hands-On Exercises & ChatGPT Prize [2024]
Udemy
Deep Learning A-Z 2024: Neural Networks, AI & ChatGPT Prize
Udemy