Interpreting Deep Neural Networks Towards Trustworthiness - IPAM at UCLA
Offered By: Institute for Pure & Applied Mathematics (IPAM) via YouTube
Course Description
Overview
Explore a comprehensive lecture on interpreting deep neural networks towards trustworthiness, presented by Bin Yu of the University of California, Berkeley. Delve into the concept of interpretable machine learning and discover the agglomerative contextual decomposition (ACD) method for neural network interpretation. Learn about the adaptive wavelet distillation (AWD) technique, which extends ACD to the frequency domain, and its applications to prediction problems in cosmology and cell biology. Examine the importance of a quality-controlled data science life cycle and the Predictability, Computability, and Stability (PCS) framework for building trustworthy interpretable models. Gain insights into the challenges of making complex deep learning models more transparent and reliable, and into practical approaches for addressing them.
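To give a flavor of the stability idea behind the PCS framework discussed in the lecture, the following is a minimal sketch (not taken from the talk): it perturbs the data, refits a model, and checks whether feature importances persist across perturbations. The dataset, model, and bootstrap perturbation scheme are illustrative placeholders, not Bin Yu's actual pipeline.

import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
# Synthetic regression data stands in for a real scientific dataset.
X, y = make_regression(n_samples=500, n_features=10, n_informative=3, random_state=0)

importances = []
for b in range(20):
    # One simple perturbation: a bootstrap resample of the rows. In PCS-style
    # analyses, perturbations can also include subsampling, preprocessing, and
    # modeling choices.
    idx = rng.integers(0, len(X), size=len(X))
    model = RandomForestRegressor(n_estimators=100, random_state=b).fit(X[idx], y[idx])
    importances.append(model.feature_importances_)

importances = np.array(importances)
mean_imp, std_imp = importances.mean(axis=0), importances.std(axis=0)
# A feature whose importance is both large and stable across perturbations is a
# stronger candidate for a trustworthy interpretation than one that fluctuates.
for j in np.argsort(mean_imp)[::-1][:5]:
    print(f"feature {j}: importance {mean_imp[j]:.3f} +/- {std_imp[j]:.3f}")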
Syllabus
Bin Yu - Interpreting Deep Neural Networks towards Trustworthiness - IPAM at UCLA
Taught by
Institute for Pure & Applied Mathematics (IPAM)
Related Courses
Interpretable Machine Learning Applications: Part 1 (Coursera Project Network via Coursera)
Interpretable Machine Learning Applications: Part 2 (Coursera Project Network via Coursera)
Interpretable Machine Learning Applications: Part 3 (Coursera Project Network via Coursera)
Interpretable Machine Learning Applications: Part 4 (Coursera Project Network via Coursera)
Machine Learning and AI Foundations: Producing Explainable AI (XAI) and Interpretable Machine Learning Solutions (LinkedIn Learning)