Interpretable Chirality-Aware Graph Neural Networks for QSAR Modeling in Drug Discovery

Offered By: Valence Labs via YouTube

Tags

- Drug Discovery Courses
- Molecular Modeling Courses
- Interpretability Courses

Course Description

Overview

Explore a conference talk on developing interpretable chirality-aware graph neural networks for quantitative structure-activity relationship modeling in drug discovery. Dive into the limitations of current graph neural networks in capturing molecular chirality and learn about the proposed Molecular-Kernel Graph Neural Network (MolKGNN) approach. Discover how MolKGNN achieves SE(3)-/conformation invariance and interpretability through molecular graph convolution and similarity score propagation. Examine the comprehensive evaluation of MolKGNN across nine datasets featuring high class imbalance, and understand its superiority over other GNNs in computer-aided drug discovery. Gain insights into the interpretability of learned kernels and their alignment with domain knowledge. The talk covers background on message passing schemes, impacts of chirality, intuition from image convolution, similarity score calculation, MolKGNN overview, screening datasets, evaluation metrics, comparison with 3D GNNs, and interpretability results.
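The description centers on MolKGNN's core operation: computing a similarity score between a node's local molecular neighborhood and a learnable molecular kernel, analogous to how image convolution matches a patch against a filter. The sketch below illustrates that general idea only; the function name, cosine measure, and greedy neighbor matching are illustrative assumptions, not the paper's exact procedure, and MolKGNN additionally handles chirality and SE(3)-/conformation invariance, which this sketch omits.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two feature vectors (with a small epsilon
    to avoid division by zero)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def neighborhood_kernel_score(center, neighbors, k_center, k_neighbors):
    """Score how well a node's 1-hop neighborhood matches a molecular kernel.

    Greedy matching of kernel neighbor slots to the most similar remaining
    graph neighbors -- a simplification assumed here for illustration.
    Returns the average of the center similarity and the matched
    neighbor similarities, so a perfect match scores close to 1.0.
    """
    score = cosine(center, k_center)
    remaining = list(range(len(neighbors)))
    for kv in k_neighbors:
        if not remaining:
            break
        best = max(remaining, key=lambda i: cosine(neighbors[i], kv))
        score += cosine(neighbors[best], kv)
        remaining.remove(best)
    return score / (1 + len(k_neighbors))
```

In a full model, one such score would be computed per kernel, and the resulting score vector propagated as the node's new representation in the next message-passing layer; inspecting the learned kernels is what gives the method its interpretability.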

Syllabus

- Intro
- Background: Message Passing Scheme
- Graph Neural Network Limitations
- The Impacts of Chirality
- Intuition from Image Convolution
- Similarity Score Calculation
- MolKGNN Overview
- Screening Datasets
- Metrics for Evaluation & Results
- Can MolKGNN Outperform 3D GNNs?
- Interpretability Results
- Conclusion
- Q&A


Taught by

Valence Labs

Related Courses

Machine Learning Modeling Pipelines in Production
DeepLearning.AI via Coursera
Live Responsible AI Dashboard: One-Stop Shop for Operationalizing RAI in Practice - Episode 43
Microsoft via YouTube
Build Responsible AI Using Error Analysis Toolkit
Microsoft via YouTube
Neural Networks Are Decision Trees - With Alexander Mattick
Yannic Kilcher via YouTube
Interpretable Explanations of Black Boxes by Meaningful Perturbation - CAP6412 Spring 2021
University of Central Florida via YouTube