Where Did My Model Go Wrong? Toolkits of Interpretability, Bias and Fairness to the Rescue
Offered By: Data Science Festival via YouTube
Course Description
Overview
Explore the challenges and solutions in machine learning model development through this insightful conference talk from the Data Science Festival. Delve into the process of creating ML models, identifying potential biases and errors at various stages of development. Learn about techniques to correct these issues and discover methods for interpreting ML models and verifying fair outcomes. Gain valuable knowledge on toolkits for interpretability, bias detection, and fairness assessment in machine learning. This 38-minute presentation, delivered at the 2023 MayDay Data Science Festival in London, offers practical insights for data scientists and ML practitioners looking to improve their model development processes and ensure more reliable, unbiased results.
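To make the fairness-assessment topic concrete, here is a minimal illustrative sketch (not taken from the talk) of one common check such toolkits provide: the demographic-parity difference, i.e. the gap in positive-prediction rates between two groups. All names and data below are hypothetical.

```python
# Illustrative sketch only -- hypothetical data, not from the talk.
# Demographic-parity difference: the absolute gap in the rate of
# positive predictions between two demographic groups.

def demographic_parity_difference(y_pred, groups):
    """Absolute gap in positive-prediction rate between the two groups."""
    rates = {}
    for g in set(groups):
        preds = [p for p, gr in zip(y_pred, groups) if gr == g]
        rates[g] = sum(preds) / len(preds)
    a, b = rates.values()
    return abs(a - b)

# Hypothetical model outputs for two groups "A" and "B".
y_pred = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(y_pred, groups))  # 0.5
```

A value near 0 means both groups receive positive predictions at similar rates; libraries such as Fairlearn and AIF360 offer production-grade versions of this and related metrics.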
Syllabus
Where Did My Model Go Wrong? Toolkits of Interpretability, Bias and Fairness to the Rescue
Taught by
Data Science Festival
Related Courses
Data Science in Action - Building a Predictive Churn Model (SAP Learning)
Applied Data Science Capstone (IBM via Coursera)
Data Modeling and Regression Analysis in Business (University of Illinois at Urbana-Champaign via Coursera)
Introduction to Predictive Analytics using Python (University of Edinburgh via edX)
Machine Learning con Python. Nivel intermedio (Coursera Project Network via Coursera)