Pragmatic Interpretability - A Human-AI Cooperation Approach
Offered By: USC Information Sciences Institute via YouTube
Course Description
Overview
Explore the concept of pragmatic interpretability in machine learning models in this 53-minute talk by Shi Feng, a postdoctoral researcher at the University of Chicago whose work focuses on human-AI cooperation in natural language processing. Delve into the challenges of understanding how AI models work and their potential for intelligence augmentation. Examine a more practical approach to interpretability that emphasizes modeling human needs in AI cooperation. Learn about evaluating and optimizing human-AI teams as unified decision-makers, and discover how models can learn to explain selectively. Investigate methods for incorporating human intuition into models and explanations outside the context of working with AI. Conclude with a discussion of how models can pragmatically infer information about their human teammates.
Syllabus
Pragmatic Interpretability
Taught by
USC Information Sciences Institute
Related Courses
Machine Learning Modeling Pipelines in Production - DeepLearning.AI via Coursera
Live Responsible AI Dashboard: One-Stop Shop for Operationalizing RAI in Practice - Episode 43 - Microsoft via YouTube
Build Responsible AI Using Error Analysis Toolkit - Microsoft via YouTube
Neural Networks Are Decision Trees - With Alexander Mattick - Yannic Kilcher via YouTube
Interpretable Explanations of Black Boxes by Meaningful Perturbation - CAP6412 Spring 2021 - University of Central Florida via YouTube