Where Did My Model Go Wrong? Toolkits of Interpretability, Bias and Fairness to the Rescue
Offered By: Data Science Festival via YouTube
Course Description
Overview
Explore the challenges and solutions in machine learning model development through this insightful conference talk from the Data Science Festival. Delve into the process of creating ML models, identifying potential biases and errors at various stages of development. Learn about techniques to correct these issues and discover methods for interpreting ML models and verifying fair outcomes. Gain valuable knowledge on toolkits for interpretability, bias detection, and fairness assessment in machine learning. This 38-minute presentation, delivered at the 2023 MayDay Data Science Festival in London, offers practical insights for data scientists and ML practitioners looking to improve their model development processes and ensure more reliable, unbiased results.
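As a rough illustration of the kind of check such fairness toolkits automate (not material from the talk itself), the sketch below compares a classifier's accuracy across groups defined by a sensitive attribute, using only scikit-learn and pandas on synthetic data. The group labels, column names, and threshold are illustrative assumptions.

```python
# Minimal sketch: per-group accuracy as a simple fairness signal.
# Synthetic data and group labels are illustrative, not from the talk.
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic data; "group" stands in for a sensitive attribute (e.g. an age band).
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
group = pd.Series(X[:, 0] > 0).map({True: "A", False: "B"})

X_train, X_test, y_train, y_test, g_train, g_test = train_test_split(
    X, y, group, test_size=0.3, random_state=0
)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
preds = model.predict(X_test)

# Per-group accuracy: a large gap is a prompt to investigate bias in the data
# or the model before trusting its outcomes.
report = (
    pd.DataFrame({"y_true": y_test, "y_pred": preds, "group": g_test.values})
    .groupby("group")
    .apply(lambda d: accuracy_score(d["y_true"], d["y_pred"]))
)
print(report)
print("accuracy gap between groups:", report.max() - report.min())
```

Dedicated libraries such as Fairlearn or SHAP wrap this kind of group-wise comparison and model interpretation in richer metrics and visualisations; the manual version above just shows the underlying idea.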
Syllabus
Where did my model go wrong? Toolkits of Interpretability, Bias and Fairness to the rescue
Taught by
Data Science Festival
Related Courses
Artificial Intelligence Ethics in Action (LearnQuest via Coursera)
Human Factors in AI (Duke University via Coursera)
Identify principles and practices for responsible AI (Microsoft via Microsoft Learn)
Debiasing AI Using Amazon SageMaker (LinkedIn Learning)
Tech On the Go: Ethics in AI (LinkedIn Learning)