Explaining Model Decisions and Fixing Them Through Human Feedback
Offered By: Stanford University via YouTube
Course Description
Overview
Syllabus
Intro
Interpretability in different stages of AI evolution
Approaches for visual explanations
Visualize any decision
Visualizing Image Captioning models
Visualizing Visual Question Answering models
Analyzing Failure modes
Grad-CAM for predicting patient outcomes
Extensions to Multi-modal Transformer based Architectures
Desirable properties of Visual Explanations
Equalizer
Biases in Vision and Language models
Human Importance-aware Network Tuning (HINT)
Contrastive Self-Supervised Learning (SSL)
Why do SSL methods fail to generalize to arbitrary images?
Does improved SSL grounding transfer to downstream tasks?
CAST makes models resilient to background changes
VQA for visually impaired users
Sub-Question Importance-aware Network Tuning
Explaining Model Decisions and Fixing them via Human Feedback
Grad-CAM for multi-modal transformers
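Several syllabus items center on Grad-CAM, which weights a convolutional layer's feature maps by the gradient of the target class score and sums them into a localization heatmap. A minimal NumPy sketch of that computation, assuming the activations and gradients have already been extracted from a network (the function name and toy shapes here are illustrative, not from the talk):

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM heatmap from one conv layer.

    activations, gradients: arrays of shape (K, H, W) -- the layer's
    K feature maps and the gradients of the target class score with
    respect to them. Returns an (H, W) heatmap normalized to [0, 1].
    """
    # Channel importance weights alpha_k: global-average-pool the gradients
    weights = gradients.mean(axis=(1, 2))             # shape (K,)
    # Weighted combination of feature maps
    cam = np.tensordot(weights, activations, axes=1)  # shape (H, W)
    # ReLU: keep only features with positive influence on the class
    cam = np.maximum(cam, 0.0)
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam

# Toy example: 3 feature maps of size 4x4
rng = np.random.default_rng(0)
acts = rng.random((3, 4, 4))
grads = rng.standard_normal((3, 4, 4))
heatmap = grad_cam(acts, grads)
print(heatmap.shape)  # (4, 4)
```

In practice the heatmap is upsampled to the input image's resolution and overlaid on it; the same recipe extends to captioning, VQA, and (as the talk discusses) multi-modal transformer architectures by choosing an appropriate layer to hook.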
Taught by
Stanford MedAI
Tags
Related Courses
Machine Learning Modeling Pipelines in Production (DeepLearning.AI via Coursera)
Live Responsible AI Dashboard: One-Stop Shop for Operationalizing RAI in Practice - Episode 43 (Microsoft via YouTube)
Build Responsible AI Using Error Analysis Toolkit (Microsoft via YouTube)
Neural Networks Are Decision Trees - With Alexander Mattick (Yannic Kilcher via YouTube)
Interpretable Explanations of Black Boxes by Meaningful Perturbation - CAP6412 Spring 2021 (University of Central Florida via YouTube)