Getting Insight Out Of and Back Into Deep Neural Networks
Offered By: BSidesLV via YouTube
Course Description
Overview
Explore deep neural networks and their interpretability in this conference talk from BSidesLV 2017. Delve into techniques for extracting meaningful information from complex neural network models, and learn how to feed that knowledge back into the networks. Gain insight into improving model transparency, understanding decision-making processes, and enhancing the overall performance of deep learning systems. Discover practical approaches to demystifying the black-box nature of neural networks and leveraging these findings for more effective and interpretable AI applications.
Syllabus
GT - Getting Insight Out Of and Back Into Deep Neural Networks - Richard Harang
Taught by
BSidesLV
Related Courses
Neural Networks for Machine Learning - University of Toronto via Coursera
機器學習技法 (Machine Learning Techniques) - National Taiwan University via Coursera
Machine Learning Capstone: An Intelligent Application with Deep Learning - University of Washington via Coursera
Прикладные задачи анализа данных (Applied Problems of Data Analysis) - Moscow Institute of Physics and Technology via Coursera
Leading Ambitious Teaching and Learning - Microsoft via edX