Where Did My Model Go Wrong? Toolkits of Interpretability, Bias and Fairness to the Rescue
Offered By: Data Science Festival via YouTube
Course Description
Overview
Explore the challenges and solutions in machine learning model development through this insightful conference talk from the Data Science Festival. Delve into the process of creating ML models, identifying potential biases and errors at various stages of development. Learn about techniques to correct these issues and discover methods for interpreting ML models and verifying fair outcomes. Gain valuable knowledge on toolkits for interpretability, bias detection, and fairness assessment in machine learning. This 38-minute presentation, delivered at the 2023 MayDay Data Science Festival in London, offers practical insights for data scientists and ML practitioners looking to improve their model development processes and ensure more reliable, unbiased results.
Syllabus
Where Did My Model Go Wrong? Toolkits of Interpretability, Bias and Fairness to the Rescue
Taught by
Data Science Festival
Related Courses
Data Analysis - Johns Hopkins University via Coursera
Computing for Data Analysis - Johns Hopkins University via Coursera
Scientific Computing - University of Washington via Coursera
Introduction to Data Science - University of Washington via Coursera
Web Intelligence and Big Data - Indian Institute of Technology Delhi via Coursera