
Model Distillation for Faithful Explanations of Medical Code Predictions

Offered By: Center for Language & Speech Processing (CLSP), JHU via YouTube

Tags

Machine Learning Courses, Healthcare Informatics Courses, Predictive Modeling Courses, Explainable AI Courses, Model Interpretability Courses

Course Description

Overview

Explore knowledge distillation techniques for generating faithful and plausible explanations of machine learning models, particularly in clinical medicine and other high-risk settings. Delve into Isabel Cachola's research from Johns Hopkins University's Center for Language & Speech Processing, which focuses on improving the interpretability of models that achieve strong predictive performance. Learn how this approach can support integrated human-machine decision-making and increase domain experts' trust in model predictions. Examine the application of these techniques to medical code prediction, based on the paper presented at the BioNLP 2022 workshop.
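
To make the core idea concrete, the sketch below shows a generic model-distillation setup in PyTorch: a small, interpretable student (a single linear layer) is trained to mimic the soft predictions of a larger teacher, so the student's per-code weights can serve as proxy explanations. All data, model sizes, and hyperparameters here are illustrative assumptions; this is not the paper's exact method.

# Minimal model-distillation sketch (illustrative, not the paper's method):
# train a small interpretable "student" to mimic a larger "teacher" so the
# student's weights can be read as an explanation of the teacher's behavior.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Toy data standing in for bag-of-words features of clinical notes and
# multi-label medical code targets (both synthetic).
num_docs, vocab_size, num_codes = 256, 500, 20
X = torch.rand(num_docs, vocab_size)

# Teacher: a larger, less interpretable predictor (a small MLP for brevity).
teacher = nn.Sequential(
    nn.Linear(vocab_size, 128), nn.ReLU(), nn.Linear(128, num_codes)
)
teacher.eval()

# Student: a single linear layer whose per-code weights over vocabulary
# features are directly inspectable.
student = nn.Linear(vocab_size, num_codes)
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

with torch.no_grad():
    teacher_probs = torch.sigmoid(teacher(X))  # soft targets per code

for epoch in range(200):
    optimizer.zero_grad()
    student_logits = student(X)
    # Distillation loss: match the teacher's soft multi-label predictions.
    loss = F.binary_cross_entropy_with_logits(student_logits, teacher_probs)
    loss.backward()
    optimizer.step()

# Proxy "explanation": highest-weight input features for one predicted code.
code_idx = 0
top_features = student.weight[code_idx].topk(5).indices
print("Top features for code", code_idx, ":", top_features.tolist())

In this setup, faithfulness is encouraged because the student is optimized to reproduce the teacher's outputs rather than the gold labels; the lecture discusses how to make such explanations both faithful and plausible for medical code prediction.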

Syllabus

Model Distillation for Faithful Explanations of Medical Code Predictions -- Isabel Cachola (JHU)


Taught by

Center for Language & Speech Processing (CLSP), JHU

Related Courses

Health Informatics on FHIR
Georgia Institute of Technology via Coursera
Interprofessional Healthcare Informatics
University of Minnesota via Coursera
Introduction to Informatics
Drexel University College of Computing & Informatics via Open Education by Blackboard
Case Studies in Personalized Medicine
Vanderbilt University via Coursera
Medicine in the Digital Age
Rice University via edX