Interpreting Deep Neural Networks Towards Trustworthiness

Offered By: Alan Turing Institute via YouTube

Tags

Deep Learning Courses Computer Vision Courses Cosmology Courses Cell Biology Courses Trustworthy AI Courses

Course Description

Overview

Explore a comprehensive conference talk on interpreting deep neural networks for trustworthiness. Delve into the challenges of interpretability in complex machine learning models and discover the agglomerative contextual decomposition (ACD) method for interpreting neural networks. Learn how ACD attributes importance to features and feature interactions, bringing insights to NLP and computer vision problems while improving generalization.

Examine the extension of ACD to the frequency domain and the development of adaptive wavelet distillation (AWD) for scientific interpretable machine learning. Understand AWD's applications in cosmology and cell biology predictions. Discuss the importance of quality control throughout the data science lifecycle for building trustworthy interpretable models. Gain valuable insights from Bin Yu of the University of California on advancing the field of interpretable and trustworthy artificial intelligence.
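To make the idea of attributing importance to features concrete, the sketch below uses a much simpler stand-in technique, occlusion-style attribution, rather than ACD itself (ACD additionally scores feature *interactions* hierarchically within the network, which is beyond a short example). The model and values here are illustrative assumptions, not from the talk.

```python
def occlusion_importance(predict, x, baseline=0.0):
    """Score each feature by how much the prediction changes when that
    feature is replaced with a baseline value (a simple stand-in for
    richer attribution methods such as ACD)."""
    base_pred = predict(x)
    scores = []
    for i in range(len(x)):
        x_masked = list(x)
        x_masked[i] = baseline  # occlude one feature at a time
        scores.append(base_pred - predict(x_masked))
    return scores

# Toy linear predictor, purely for demonstration.
weights = [2.0, -1.0, 0.5]
predict = lambda x: sum(w * xi for w, xi in zip(weights, x))

print(occlusion_importance(predict, [1.0, 1.0, 1.0]))  # → [2.0, -1.0, 0.5]
```

For a linear model the occlusion scores recover each weight's contribution exactly; for deep networks, methods like ACD are needed to account for nonlinear interactions between features.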

Syllabus

Interpreting deep neural networks towards trustworthiness - Bin Yu, University of California


Taught by

Alan Turing Institute

Related Courses

Neural Networks for Machine Learning
University of Toronto via Coursera
機器學習技法 (Machine Learning Techniques)
National Taiwan University via Coursera
Machine Learning Capstone: An Intelligent Application with Deep Learning
University of Washington via Coursera
Прикладные задачи анализа данных (Applied Problems of Data Analysis)
Moscow Institute of Physics and Technology via Coursera
Leading Ambitious Teaching and Learning
Microsoft via edX