Where Did My Model Go Wrong? Toolkits of Interpretability, Bias and Fairness to the Rescue
Offered By: Data Science Festival via YouTube
Course Description
Overview
Explore where machine learning models go wrong in this conference talk from the Data Science Festival. Follow the process of building an ML model, learn where biases and errors can creep in at each stage of development, and examine techniques for correcting them. Discover toolkits for interpretability, bias detection, and fairness assessment, along with methods for interpreting ML models and verifying fair outcomes. This 38-minute presentation, delivered at the 2023 MayDay Data Science Festival in London, offers practical insights for data scientists and ML practitioners looking to improve their model development processes and produce more reliable, unbiased results.
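The talk itself is not transcribed here, but as a minimal sketch of the kind of fairness check such toolkits automate, the snippet below computes the demographic parity difference: the gap in positive-prediction rates between groups. The function name and toy data are illustrative, not taken from the talk.

```python
# Illustrative sketch (assumed, not from the talk): demographic parity
# difference, one common group-fairness metric. A value of 0 means all
# groups receive positive predictions at the same rate.

def demographic_parity_difference(predictions, groups):
    """Absolute gap between the highest and lowest positive-prediction
    rates across the groups present in `groups`."""
    rates = {}
    for g in set(groups):
        group_preds = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(group_preds) / len(group_preds)
    return max(rates.values()) - min(rates.values())

# Toy data: group "a" gets a positive prediction 75% of the time,
# group "b" only 25% of the time.
preds = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(preds, groups))  # 0.5
```

Libraries such as those discussed in interpretability and fairness talks typically wrap metrics like this with reporting and mitigation tooling; the point of the sketch is only to show what is being measured.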
Syllabus
Where did my model go wrong? Toolkits of Interpretability, Bias and Fairness to the rescue
Taught by
Data Science Festival
Related Courses
Machine Learning Interpretable: interpretML y LIME
Coursera Project Network via Coursera
Machine Learning Interpretable: SHAP, PDP y permutacion
Coursera Project Network via Coursera
Evaluating Model Effectiveness in Microsoft Azure
Pluralsight
MIT Deep Learning in Life Sciences Spring 2020
Massachusetts Institute of Technology via YouTube
Applied Data Science Ethics
statistics.com via edX