Can You Trust Your AI?
Offered By: Devoxx via YouTube
Course Description
Overview
Explore the critical topic of trustworthy AI in this 46-minute conference talk from Devoxx. Delve into the world of Explainable AI (XAI) and its importance in making AI systems more transparent and reliable, especially in high-impact domains like healthcare and finance. Learn about the TrustyAI initiative at Red Hat, which focuses on enhancing decision trustworthiness through explainability, runtime tracing, and accountability. Discover various types of explanations, including local and knowledgeable explanations, and understand the importance of minimizing complexity while adhering to scientific methods. Examine concepts such as fairness, monitoring, and microservices in AI systems. Gain insights into practical applications, theoretical work on explainability, and the extraction of explanations from AI models. Conclude with a Q&A session addressing the significance of explainability in modern AI development and deployment.
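To give a flavour of what a "local explanation" means in practice, here is a minimal, self-contained Java sketch of a perturbation-based attribution: each feature of a single input is removed in turn and the change in the model's score is recorded. The toy scoring model, the feature names, and the zero-valued baseline are illustrative assumptions only; this is not code from the talk or from the TrustyAI library.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Function;

/**
 * Sketch of a perturbation-based local explanation: for one input,
 * zero out each feature in turn and record how much the model's
 * score changes. Larger changes suggest the feature mattered more
 * for this particular prediction.
 */
public class LocalExplanationSketch {

    public static void main(String[] args) {
        // Hypothetical scoring model: a simple weighted sum of three features.
        Function<double[], Double> model =
                x -> 0.6 * x[0] + 0.3 * x[1] + 0.1 * x[2];

        double[] input = {1.0, 1.0, 1.0};             // the instance to explain
        String[] names = {"income", "age", "tenure"}; // illustrative feature names
        double baseline = model.apply(input);

        // Attribution = change in output when a feature is removed (set to 0).
        Map<String, Double> attribution = new LinkedHashMap<>();
        for (int i = 0; i < input.length; i++) {
            double[] perturbed = input.clone();
            perturbed[i] = 0.0;
            attribution.put(names[i], baseline - model.apply(perturbed));
        }

        attribution.forEach((name, score) ->
                System.out.printf("%s -> %.2f%n", name, score));
    }
}
```

Real explainers such as those discussed in the talk are more sophisticated (sampling many perturbations, fitting local surrogate models), but the underlying idea of probing a black-box model around a single prediction is the same.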
Syllabus
Intro
Artificial Intelligence
Problems with AI
Right of explanation
Different types of explanation
Local explanation
Knowledgeable explanation
Minimize if possible
Use a scientific method
Fairness
Trust AI
Kogito
Microservices
Monitoring
Library
References
Q&A
Explainability
Extracting explanations
Is explainability important?
Theoretical work on explainability
Taught by
Devoxx
Related Courses
Artificial Intelligence Ethics in Action (LearnQuest via Coursera)
Human Factors in AI (Duke University via Coursera)
Identify principles and practices for responsible AI (Microsoft via Microsoft Learn)
Debiasing AI Using Amazon SageMaker (LinkedIn Learning)
Tech On the Go: Ethics in AI (LinkedIn Learning)