Interpretability Agents: Automating and Scaling Model Interpretation
Offered By: Simons Institute via YouTube
Course Description
Overview
Explore a lecture on Automated Interpretability Agents (AIAs) for scaling model interpretation. Discover how AIAs, built from language models equipped with tools, can design and run experiments to answer questions about a model of interest. Learn about their ability to operationalize hypotheses as code, update those hypotheses based on observed model behavior, and reach human-level performance on a range of model-understanding tasks. Gain insight into the potential of AIAs to automate and scale model interpretation, making intensive explanatory auditing more accessible to model users and providers. Understand how this research aims to create a richer, iterative, and modular interface for interpretability that can scale to large and complex models.
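The hypothesize-experiment-update loop described above can be sketched in miniature. This is a hypothetical illustration, not the lecture's implementation: the candidate hypotheses here are hard-coded, whereas an actual AIA would generate and refine them with a language model and tool calls; all function names are made up for the example.

```python
# Toy sketch of an automated interpretability agent's loop:
# propose hypotheses as code, test them against the model of
# interest, keep only those consistent with observed behavior.
# (All names below are illustrative assumptions.)

def subject_model(x: int) -> int:
    # Stand-in "model of interest" whose behavior the agent must explain.
    return abs(x) * 2

def run_experiment(hypothesis, inputs) -> bool:
    # Operationalize a hypothesis as code and check it against the model.
    return all(hypothesis(x) == subject_model(x) for x in inputs)

# Candidate hypotheses (in a real AIA these would be generated by an LM).
candidates = [
    ("doubles its input", lambda x: x * 2),
    ("doubles the absolute value", lambda x: abs(x) * 2),
]

probe_inputs = [-3, -1, 0, 2, 5]
surviving = [name for name, h in candidates if run_experiment(h, probe_inputs)]
print(surviving)  # only the hypothesis matching observed behavior survives
```

Running this keeps only "doubles the absolute value", since the first hypothesis fails on negative probe inputs; a full agent would iterate, proposing new hypotheses when all candidates are falsified.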
Syllabus
Interpretability Agents
Taught by
Simons Institute
Related Courses
Machine Learning Modeling Pipelines in Production - DeepLearning.AI via Coursera
Live Responsible AI Dashboard: One-Stop Shop for Operationalizing RAI in Practice - Episode 43 - Microsoft via YouTube
Build Responsible AI Using Error Analysis Toolkit - Microsoft via YouTube
Neural Networks Are Decision Trees - With Alexander Mattick - Yannic Kilcher via YouTube
Interpretable Explanations of Black Boxes by Meaningful Perturbation - CAP6412 Spring 2021 - University of Central Florida via YouTube