Using Logic Programming and Kernel-Grouping for Improving Interpretability of Convolutional Neural Networks

Offered By: ACM SIGPLAN via YouTube

Tags

Interpretability Courses, Image Classification Courses

Course Description

Overview

Explore a neurosymbolic framework called NeSyFOLD-G that enhances the interpretability of Convolutional Neural Networks (CNNs) in image classification tasks. Learn about the innovative kernel-grouping algorithm that reduces the size of generated rule-sets, improving overall interpretability. Discover how the framework uses cosine-similarity between feature maps to group similar kernels, binarizes kernel group outputs, and employs the FOLD-SE-M algorithm to generate symbolic rule-sets. Understand the process of mapping predicates to human-understandable concepts using semantic segmentation masks. Gain insights into replacing CNN layers with rule-sets to create the NeSy-G model and using the s(CASP) system for prediction justification. Delve into a novel algorithm for labeling predicates with corresponding semantic concepts, bridging the gap between connectionist knowledge and symbolic representation in deep learning.
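The kernel-grouping and binarization steps described above lend themselves to a short sketch. The following Python fragment is a rough illustration only: the function names, the greedy similarity-threshold grouping, and the binarization cutoff are assumptions made for exposition, not the NeSyFOLD-G implementation presented in the talk.

# Illustrative sketch only: names, the greedy grouping strategy, and the
# thresholds below are assumptions, not the NeSyFOLD-G reference code.
import numpy as np

def group_kernels(feature_maps, threshold=0.8):
    # feature_maps: (num_kernels, H, W) activations of the last conv layer,
    # e.g. averaged over a representative set of training images.
    flat = feature_maps.reshape(feature_maps.shape[0], -1)
    unit = flat / (np.linalg.norm(flat, axis=1, keepdims=True) + 1e-8)
    sim = unit @ unit.T  # pairwise cosine similarity between kernels

    groups, assigned = [], set()
    for k in range(sim.shape[0]):
        if k in assigned:
            continue
        # Greedily collect every unassigned kernel similar enough to kernel k.
        members = [j for j in range(sim.shape[0])
                   if j not in assigned and sim[k, j] >= threshold]
        assigned.update(members)
        groups.append(members)
    return groups

def binarize_groups(feature_maps, groups, gamma=0.5):
    # Pool each group's activation (mean feature-map norm of its members) and
    # binarize it against a cutoff; gamma * max is a stand-in for whatever
    # threshold the framework actually derives from the training data.
    norms = np.linalg.norm(feature_maps.reshape(feature_maps.shape[0], -1), axis=1)
    group_act = np.array([norms[g].mean() for g in groups])
    return (group_act >= gamma * group_act.max()).astype(int)

# Example: 64 kernels with 7x7 feature maps -> one binary atom per group.
# A binary table of this kind is the sort of input a rule learner such as
# FOLD-SE-M consumes to produce the symbolic rule-set.
fmaps = np.random.rand(64, 7, 7)
bits = binarize_groups(fmaps, group_kernels(fmaps, threshold=0.9))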

Syllabus

[PADL'24] Using Logic Programming and Kernel-Grouping for Improving Interpretability of Convolutional Neural Networks


Taught by

ACM SIGPLAN

Related Courses

Clasificación de imágenes: ¿cómo reconocer el contenido de una imagen? (Image Classification: How to Recognize the Content of an Image?)
Universitat Autònoma de Barcelona (Autonomous University of Barcelona) via Coursera
Core ML: Machine Learning for iOS
Udacity
Fundamentals of Deep Learning for Computer Vision
Nvidia via Independent
Computer Vision and Image Analysis
Microsoft via edX
Using GPUs to Scale and Speed-up Deep Learning
IBM via edX