Pragmatic Interpretability - A Human-AI Cooperation Approach
Offered By: USC Information Sciences Institute via YouTube
Course Description
Overview
Explore the concept of pragmatic interpretability in machine learning models through this insightful 53-minute talk by Shi Feng of the University of Chicago. Delve into the challenges of understanding how AI models work and their potential for intelligence augmentation. Examine a more practical approach to interpretability that emphasizes modeling human needs in AI cooperation. Learn about evaluating and optimizing human-AI teams as unified decision-makers, and discover how models can learn to explain selectively. Investigate methods for incorporating human intuition into models, as well as explanations outside the context of working with AI. Conclude with a discussion of how models can pragmatically infer information about their human teammates. Gain valuable insights from Shi Feng, a postdoctoral researcher at the University of Chicago whose work focuses on human-AI cooperation in natural language processing.
Syllabus
Pragmatic Interpretability
Taught by
USC Information Sciences Institute
Related Courses
Explainable AI: Scene Classification and GradCam Visualization
Coursera Project Network via Coursera
Artificial Intelligence Privacy and Convenience
LearnQuest via Coursera
Natural Language Processing and Capstone Assignment
University of California, Irvine via Coursera
Modern Artificial Intelligence Masterclass: Build 6 Projects
Udemy
Data Science for Business
DataCamp