Model Distillation for Faithful Explanations of Medical Code Predictions
Offered By: Center for Language & Speech Processing (CLSP), JHU via YouTube
Course Description
Overview
Explore knowledge distillation techniques for generating faithful and plausible explanations of machine learning predictions, particularly in clinical medicine and other high-risk settings. Delve into Isabel Cachola's research from Johns Hopkins University's Center for Language & Speech Processing, which focuses on improving the interpretability of models that already achieve strong predictive performance. Learn how this approach can support integrated human-machine decision-making and increase domain experts' trust in model predictions. Examine the application of these techniques to medical code prediction, based on the paper presented at the BioNLP 2022 workshop.
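For readers unfamiliar with the core idea, the sketch below illustrates generic knowledge distillation (soft-label matching in the style of Hinton et al., 2015), the technique family the talk builds on. It is a minimal, illustrative example, not the specific distillation objective from the talk or paper, and all model and variable names here are hypothetical.

# Minimal knowledge-distillation sketch: a small "student" model learns to
# match the temperature-softened output distribution of a larger "teacher",
# blended with ordinary cross-entropy on gold labels.
import torch
import torch.nn as nn
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # KL divergence between softened distributions; the T^2 factor keeps
    # gradient magnitudes comparable across temperatures.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # Standard hard-label loss on the ground-truth codes.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

# Toy usage with random data and linear stand-ins for the two models.
teacher = nn.Linear(16, 4)   # hypothetical large, accurate model
student = nn.Linear(16, 4)   # hypothetical small, interpretable model
x = torch.randn(8, 16)
labels = torch.randint(0, 4, (8,))
with torch.no_grad():
    t_logits = teacher(x)    # teacher is frozen during distillation
loss = distillation_loss(student(x), t_logits, labels)
loss.backward()

In this setup the student can be chosen for interpretability, which is the motivation the talk develops in the medical-coding context.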
Syllabus
Model Distillation for Faithful Explanations of Medical Code Predictions -- Isabel Cachola (JHU)
Taught by
Center for Language & Speech Processing (CLSP), JHU
Related Courses
Interpretable Machine Learning: interpretML and LIME
Coursera Project Network via Coursera
Interpretable Machine Learning: SHAP, PDP and Permutation
Coursera Project Network via Coursera
Evaluating Model Effectiveness in Microsoft Azure
Pluralsight
MIT Deep Learning in Life Sciences Spring 2020
Massachusetts Institute of Technology via YouTube
Applied Data Science Ethics
statistics.com via edX