Interpretable Active Learning
Offered By: Association for Computing Machinery (ACM) via YouTube
Course Description
Overview
Explore a conference talk on interpretable active learning presented by Richard Phillips at FAT* 2018. Delve into an approach that pairs active learning with LIME (Local Interpretable Model-agnostic Explanations) so that the unlabeled points a model queries for labeling can be explained rather than selected opaquely. Discover how this added transparency can help practitioners understand and trust the data-selection process. Learn about the formula underlying the technique and its potential applications across domains, and follow the presentation through its conclusion and the questions raised during the discussion.
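To make the combination concrete, the following is a minimal sketch, not the speaker's code, of how a LIME explanation can accompany an uncertainty-sampling query. It assumes scikit-learn and the lime package are installed; the iris dataset, random-forest model, and labeled/pool split are illustrative placeholders.

import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_iris()
X, y = data.data, data.target

# Pretend the first 30 points are labeled; the rest form the unlabeled pool.
labeled, pool = np.arange(30), np.arange(30, len(X))

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X[labeled], y[labeled])

# Uncertainty sampling: query the pool point whose top-class probability is lowest.
probs = model.predict_proba(X[pool])
query_idx = pool[np.argmin(probs.max(axis=1))]

# Use LIME to explain the model's prediction at the queried point,
# i.e., which features drive the uncertainty behind this query.
explainer = LimeTabularExplainer(
    X[labeled],
    feature_names=data.feature_names,
    class_names=data.target_names,
    discretize_continuous=True,
)
explanation = explainer.explain_instance(X[query_idx], model.predict_proba, num_features=4)

print(f"Queried point index: {query_idx}")
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")

Printing the per-feature weights alongside each query gives a human-readable reason for why that point was chosen, which is the kind of transparency the talk motivates.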
Syllabus
Introduction
Title
LIME
Formula
Conclusion
Questions
Taught by
ACM FAccT Conference
Related Courses
Machine Learning Modeling Pipelines in Production (DeepLearning.AI via Coursera)
Live Responsible AI Dashboard: One-Stop Shop for Operationalizing RAI in Practice - Episode 43 (Microsoft via YouTube)
Build Responsible AI Using Error Analysis Toolkit (Microsoft via YouTube)
Neural Networks Are Decision Trees - With Alexander Mattick (Yannic Kilcher via YouTube)
Interpretable Explanations of Black Boxes by Meaningful Perturbation - CAP6412 Spring 2021 (University of Central Florida via YouTube)