Using Logic Programming and Kernel-Grouping for Improving Interpretability of Convolutional Neural Networks

Offered By: ACM SIGPLAN via YouTube

Tags

Interpretability Courses
Image Classification Courses

Course Description

Overview

Explore a neurosymbolic framework called NeSyFOLD-G that enhances the interpretability of Convolutional Neural Networks (CNNs) in image classification tasks. Learn about the kernel-grouping algorithm that reduces the size of the generated rule-sets, improving overall interpretability. Discover how the framework uses cosine similarity between feature maps to group similar kernels, binarizes the outputs of each kernel group, and employs the FOLD-SE-M algorithm to generate symbolic rule-sets. Understand the process of mapping predicates to human-understandable concepts using semantic segmentation masks. Gain insights into replacing the CNN's final layers with the generated rule-set to create the NeSy-G model, and into using the s(CASP) system to produce justifications for predictions. Delve into a novel algorithm for labeling predicates with their corresponding semantic concepts, bridging the gap between connectionist knowledge and symbolic representation in deep learning.
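For intuition, here is a minimal Python sketch of the two steps the description names: grouping kernels whose feature maps are cosine-similar, and binarizing each group's pooled output. This is not the authors' NeSyFOLD-G implementation; the greedy assignment, the similarity threshold, and the gamma cutoff below are illustrative assumptions, and the paper's actual grouping and binarization procedures differ in detail.

    import numpy as np

    def cosine_similarity(a, b):
        # Cosine similarity between two flattened feature maps.
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    def group_kernels(feature_maps, threshold=0.8):
        # feature_maps: (num_kernels, H, W), e.g. averaged over a sample of images.
        # Greedily assigns each kernel to the first group whose representative
        # map is cosine-similar above `threshold` (threshold value is an assumption).
        flats = feature_maps.reshape(feature_maps.shape[0], -1)
        groups = []  # list of lists of kernel indices
        for k, fm in enumerate(flats):
            for g in groups:
                if cosine_similarity(fm, flats[g[0]]) >= threshold:
                    g.append(k)
                    break
            else:
                groups.append([k])
        return groups

    def binarize_group_outputs(feature_maps, groups, gamma=0.5):
        # Pool each group's feature-map norms and threshold them to {0, 1};
        # `gamma` stands in for the per-group cutoff the framework derives.
        acts = np.array([np.mean([np.linalg.norm(feature_maps[k]) for k in g])
                         for g in groups])
        return (acts >= gamma * acts.max()).astype(int)

    # Toy usage: 8 random "feature maps" standing in for a final conv layer.
    rng = np.random.default_rng(0)
    maps = rng.random((8, 4, 4))
    groups = group_kernels(maps, threshold=0.9)
    print(groups, binarize_group_outputs(maps, groups))

The resulting binary vector is what a rule learner such as FOLD-SE-M could consume, with each group treated as one predicate that is later mapped to a human-understandable concept.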

Syllabus

[PADL'24] Using Logic Programming and Kernel-Grouping for Improving Interpretability of Convolutional Neural Networks


Taught by

ACM SIGPLAN

Related Courses

Machine Learning Modeling Pipelines in Production
DeepLearning.AI via Coursera
Live Responsible AI Dashboard: One-Stop Shop for Operationalizing RAI in Practice - Episode 43
Microsoft via YouTube
Build Responsible AI Using Error Analysis Toolkit
Microsoft via YouTube
Neural Networks Are Decision Trees - With Alexander Mattick
Yannic Kilcher via YouTube
Interpretable Explanations of Black Boxes by Meaningful Perturbation - CAP6412 Spring 2021
University of Central Florida via YouTube