Interpreting Deep Neural Networks Towards Trustworthiness
Offered By: Alan Turing Institute via YouTube
Course Description
Overview
Explore a conference talk on interpreting deep neural networks towards trustworthiness. Delve into the challenges of interpretability in complex machine learning models and discover the agglomerative contextual decomposition (ACD) method for interpreting neural networks. Learn how ACD attributes importance to features and to feature interactions, yielding insights on NLP and computer vision problems while improving generalization. Examine the extension of ACD to the frequency domain and the development of adaptive wavelet distillation (AWD) for interpretable machine learning in science. Understand AWD's applications to prediction problems in cosmology and cell biology. Consider the importance of quality control throughout the data science lifecycle for building trustworthy, interpretable models. Gain insights from Bin Yu of the University of California on advancing interpretable and trustworthy artificial intelligence.
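To make the core idea concrete, here is a minimal NumPy sketch of contextual-decomposition-style attribution, the building block that ACD agglomerates hierarchically. The toy two-layer network, its random weights, and the cd_attribution helper are illustrative assumptions, not the released ACD code; the ReLU split rule is one common linearization, chosen so that the two parts always sum to the exact forward-pass output.

```python
# Minimal sketch (assumption-laden): contextual-decomposition-style
# attribution on a toy two-layer ReLU network. Not the released ACD code.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy network: 4 inputs -> 8 hidden units -> 1 output.
W1, b1 = rng.normal(size=(8, 4)), rng.normal(size=8)
W2, b2 = rng.normal(size=(1, 8)), rng.normal(size=1)

def split_relu(rel, irrel):
    # One common linearization: credit 'rel' with the marginal effect of
    # adding it on top of 'irrel'; the parts still sum to ReLU(rel + irrel).
    new_irrel = np.maximum(irrel, 0.0)
    new_rel = np.maximum(rel + irrel, 0.0) - new_irrel
    return new_rel, new_irrel

def cd_attribution(x, feature_group):
    """Propagate a relevant/irrelevant split of input x through the network.
    'rel' tracks the contribution of the chosen feature group; 'irrel' the
    remainder (biases are assigned to the remainder here, by assumption)."""
    mask = np.zeros_like(x)
    mask[feature_group] = 1.0
    rel, irrel = x * mask, x * (1.0 - mask)

    rel, irrel = W1 @ rel, W1 @ irrel + b1   # linear layers split exactly
    rel, irrel = split_relu(rel, irrel)
    rel, irrel = W2 @ rel, W2 @ irrel + b2
    rel, irrel = split_relu(rel, irrel)
    return rel, irrel                        # rel + irrel == full forward pass

x = rng.normal(size=4)
rel, irrel = cd_attribution(x, feature_group=[0, 1])
print("contribution of features {0, 1}:", rel, "remainder:", irrel)
```

Roughly speaking, ACD then builds on scores like this by greedily merging the feature groups whose joint score most exceeds the sum of their individual scores, producing a hierarchy of feature interactions.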
Syllabus
Interpreting deep neural networks towards trustworthiness - Bin Yu, University of California
Taught by
Alan Turing Institute
Related Courses
Creating Trustworthy and Ethical Artificial Intelligence (SAP Learning)
AI and the Law: Implementing Trustworthy AI (Pluralsight)
Trustworthy AI for Healthcare Management (Politecnico di Milano via Coursera)
Solana Larsen - Who Has Power Over AI? (Stanford University via YouTube)
Human-Centered AI: Challenges and Governance in News Automation (Association for Computing Machinery (ACM) via YouTube)