Explainable AI (XAI)
Offered By: Duke University via Coursera
Course Description
Overview
In an era where Artificial Intelligence (AI) is rapidly transforming high-risk domains like healthcare, finance, and criminal justice, the ability to develop AI systems that are not only accurate but also transparent and trustworthy is critical. The Explainable AI (XAI) Specialization is designed to empower AI professionals, data scientists, machine learning engineers, and product managers with the knowledge and skills needed to create AI solutions that meet the highest standards of ethical and responsible AI.
Taught by Dr. Brinnae Bent, an expert in bridging the gap between research and industry in machine learning, this course series leverages her extensive experience leading projects and developing impactful algorithms for some of the largest companies in the world. Dr. Bent's work, ranging from helping people walk to noninvasively monitoring glucose, underscores the meaningful applications of AI in real-world scenarios.
Throughout this series, learners will explore key topics including Explainable AI (XAI) concepts, interpretable machine learning, and advanced explainability techniques for large language models (LLMs) and generative computer vision models. Hands-on programming labs in Python, implementing local and global explainability techniques, and case studies provide practical experience. This series is ideal for professionals with a basic to intermediate understanding of machine learning concepts like supervised learning and neural networks.
Syllabus
Course 1: Developing Explainable AI (XAI)
Course 2: Interpretable Machine Learning
Course 3: Explainable Machine Learning (XAI)
Courses
- Developing Explainable AI (XAI): As Artificial Intelligence (AI) becomes integrated into high-risk domains like healthcare, finance, and criminal justice, it is critical that those responsible for building these systems think outside the black box and develop systems that are not only accurate but also transparent and trustworthy. This course provides a comprehensive introduction to Explainable AI (XAI), empowering you to develop AI solutions aligned with responsible AI principles. Through discussions, case studies, and real-world examples, you will gain the following skills:
  1. Define key XAI terminology and concepts, including interpretability, explainability, and transparency.
  2. Evaluate different interpretable and explainable approaches, understanding their trade-offs and applications.
  3. Integrate XAI explanations into decision-making processes for enhanced transparency and trust.
  4. Assess XAI systems for robustness, privacy, and ethical considerations, ensuring responsible AI development.
  5. Apply XAI techniques to cutting-edge areas like Generative AI, staying ahead of emerging trends.
  This course is ideal for AI professionals, data scientists, machine learning engineers, product managers, and anyone involved in developing or deploying AI systems. By mastering XAI, you'll be equipped to create AI solutions that are not only powerful but also interpretable, ethical, and trustworthy, solving critical challenges in domains like healthcare, finance, and criminal justice. To succeed in this course, you should have experience building AI products and a basic understanding of machine learning concepts like supervised learning and neural networks. The course covers explainable AI techniques and applications without deep technical detail.
- Explainable Machine Learning (XAI): As Artificial Intelligence (AI) becomes integrated into high-risk domains like healthcare, finance, and criminal justice, it is critical that those responsible for building these systems think outside the black box and develop systems that are not only accurate but also transparent and trustworthy. This course is a comprehensive, hands-on guide to Explainable Machine Learning (XAI), empowering you to develop AI solutions aligned with responsible AI principles. Through discussions, case studies, programming labs, and real-world examples, you will gain the following skills:
  1. Implement local explainability techniques such as LIME, SHAP, and ICE plots using Python.
  2. Implement global explainability techniques such as Partial Dependence Plots (PDP) and Accumulated Local Effects (ALE) plots in Python.
  3. Apply example-based explanation techniques to explain machine learning models using Python.
  4. Visualize and explain neural network models using state-of-the-art (SOTA) techniques in Python.
  5. Critically evaluate interpretable attention and saliency methods for transformer model explanations.
  6. Explore emerging approaches to explainability for large language models (LLMs) and generative computer vision models.
  This course is ideal for data scientists and machine learning engineers who have a firm grasp of machine learning but little exposure to XAI concepts. By mastering XAI approaches, you'll be equipped to create AI solutions that are not only powerful but also interpretable, ethical, and trustworthy, solving critical challenges in domains like healthcare, finance, and criminal justice. To succeed in this course, you should have an intermediate understanding of machine learning concepts like supervised learning and neural networks.
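To give a flavor of the global explainability techniques named above, here is a minimal, dependency-free sketch of the idea behind a Partial Dependence Plot: pin one feature to each value on a grid, average the model's predictions over the rest of the data, and read off how the output depends on that feature. The `model` function and data here are invented for illustration; in practice you would call a trained estimator's predict method, and libraries such as scikit-learn implement PDPs more robustly.

```python
def model(x1, x2):
    """Hypothetical black-box model of two features (stand-in for a trained estimator)."""
    return 3.0 * x1 + x1 * x2

def partial_dependence(model, data, feature_index, grid):
    """For each grid value, pin the chosen feature to it across the whole
    dataset and average the model's predictions."""
    pd_values = []
    for g in grid:
        preds = []
        for row in data:
            row = list(row)
            row[feature_index] = g  # pin the feature of interest
            preds.append(model(*row))
        pd_values.append(sum(preds) / len(preds))
    return pd_values

data = [(0.0, 1.0), (1.0, 2.0), (2.0, 0.0)]
grid = [0.0, 1.0, 2.0]
print(partial_dependence(model, data, feature_index=0, grid=grid))  # → [0.0, 4.0, 8.0]
```

Plotting the returned averages against the grid gives the PDP curve; ICE plots are the same idea without the final averaging step (one curve per data row).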
- Interpretable Machine Learning: As Artificial Intelligence (AI) becomes integrated into high-risk domains like healthcare, finance, and criminal justice, it is critical that those responsible for building these systems think outside the black box and develop systems that are not only accurate but also transparent and trustworthy. This course is a comprehensive, hands-on guide to Interpretable Machine Learning, empowering you to develop AI solutions aligned with responsible AI principles. You will also gain an understanding of the emerging field of Mechanistic Interpretability and its use in understanding large language models. Through discussions, case studies, programming labs, and real-world examples, you will gain the following skills:
  1. Describe interpretable machine learning and differentiate between interpretability and explainability.
  2. Explain and implement regression models in Python.
  3. Demonstrate knowledge of generalized models in Python.
  4. Explain and implement decision trees in Python.
  5. Demonstrate knowledge of decision rules in Python.
  6. Define and explain neural network interpretable model approaches, including prototype-based networks, monotonic networks, and Kolmogorov-Arnold networks.
  7. Explain foundational Mechanistic Interpretability concepts, including features and circuits.
  8. Describe the Superposition Hypothesis.
  9. Define Representation Learning and analyze current research on scaling Representation Learning to LLMs.
  This course is ideal for data scientists and machine learning engineers who have a firm grasp of machine learning but little exposure to interpretability concepts. By mastering Interpretable Machine Learning approaches, you'll be equipped to create AI solutions that are not only powerful but also ethical and trustworthy, solving critical challenges in domains like healthcare, finance, and criminal justice. To succeed in this course, you should have an intermediate understanding of machine learning concepts like supervised learning and neural networks.
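The regression-model skill listed above can be illustrated with a small, dependency-free sketch of why such models are considered interpretable: a one-feature ordinary least squares fit has a closed form, and its slope reads directly as "predicted change in y per unit change in x". The dataset here is invented for illustration; real coursework would use a library such as scikit-learn or statsmodels.

```python
def fit_simple_ols(xs, ys):
    """Return (intercept, slope) minimizing squared error for y ~ a + b*x,
    using the closed-form solution slope = cov(x, y) / var(x)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return intercept, slope

# Illustrative data generated from y = 1 + 2x, so the fit should recover those values.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0, 5.0, 7.0, 9.0]
intercept, slope = fit_simple_ols(xs, ys)
print(f"y = {intercept:.2f} + {slope:.2f} * x")  # → y = 1.00 + 2.00 * x
```

The same "read the model directly" property motivates the other interpretable families in the list: decision trees expose their splits, and decision rules are human-readable by construction.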
Taught by
Brinnae Bent, PhD
Related Courses
- Design Computing: 3D Modeling in Rhinoceros with Python/Rhinoscript (University of Michigan via Coursera)
- A Practical Introduction to Test-Driven Development (LearnQuest via Coursera)
- FinTech for Finance and Business Leaders (ACCA via edX)
- Access Bioinformatics Databases with Biopython (Coursera Project Network via Coursera)
- Accounting Data Analytics (University of Illinois at Urbana-Champaign via Coursera)