How to Interpret and Explain Black Box Models
Offered By: Data Council via YouTube
Course Description
Overview
Explore techniques for interpreting and explaining black box machine learning models in this 31-minute conference talk from Data Council. Gain a high-level overview of popular model explanation techniques, including explainable boosting machines, visual analytics, distillation, prototypes, saliency maps, counterfactuals, feature visualization, LIME, SHAP, InterpretML, and TCAV. Learn from Sophia Yang, a Senior Data Scientist and Developer Advocate at Anaconda, as she shares insights on increasing model interpretability and explainability. Discover how these techniques can deepen your understanding of complex machine learning models and their decision-making processes. Benefit from Yang's expertise in data science and her contributions to the Python open-source community through various libraries. Expand your knowledge of model interpretation methods to improve transparency and trust in your machine learning projects.
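To make the style of technique covered in the talk concrete, the sketch below shows a local, model-agnostic explanation with LIME, one of the libraries mentioned above. This is a minimal illustration assuming a scikit-learn random forest on the breast cancer dataset; the dataset, model, and parameter choices are our own assumptions, not code from the talk.

```python
# Minimal LIME sketch (illustrative assumptions, not material from the talk):
# explain one prediction of a "black box" classifier with a local surrogate.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Fit an opaque classifier on a standard tabular dataset.
data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# LIME perturbs a single instance, observes how the black box responds,
# and fits a small interpretable (linear) model around that instance.
explainer = LimeTabularExplainer(
    X_train,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)
explanation = explainer.explain_instance(
    X_test[0], model.predict_proba, num_features=5
)

# Top feature contributions for this one prediction.
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")
```

The same pattern, explaining an individual prediction rather than the whole model, carries over to SHAP and the other local methods surveyed in the talk.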
Syllabus
How to Interpret & Explain Your Black Box Models | Anaconda
Taught by
Data Council
Related Courses
Explainable Machine Learning with LIME and H2O in R
Coursera Project Network via Coursera

Machine Learning Interpretable: interpretML y LIME
Coursera Project Network via Coursera

Capstone Assignment - CDSS 5
University of Glasgow via Coursera

Machine Learning and AI Foundations: Producing Explainable AI (XAI) and Interpretable Machine Learning Solutions
LinkedIn Learning

Guided Project: Predict World Cup Soccer Results with ML
IBM via edX