Theory of Graph Neural Networks: Representation and Learning
Offered By: International Mathematical Union via YouTube
Course Description
Overview
Explore a comprehensive lecture on Graph Neural Networks (GNNs) that delves into their theoretical foundations, representation capabilities, and learning properties. Gain insights into the approximation and learning characteristics of message passing GNNs and higher-order GNNs, with a focus on function approximation, estimation, generalization, and extrapolation. Discover connections between GNNs and graph isomorphism, equivariant functions, local algorithms, and dynamic programming. Examine the challenges and potential solutions for improving discriminative power, generalization, and extrapolation in GNNs. Analyze the computational structure and algorithmic alignment of these models, and consider open questions in the field. Enhance your understanding of GNNs' applications in machine learning tasks involving nodes, graphs, and point configurations.
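As a rough companion to the overview above, here is a minimal sketch of the message passing update that the lecture analyzes: each node aggregates its neighbors' embeddings, combines them with its own, and a permutation-invariant readout pools node embeddings into a graph embedding. This is an illustrative assumption on our part, not code from the lecture; the sum aggregation, ReLU nonlinearity, and function names are arbitrary choices.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def message_passing_layer(A, H, W_self, W_neigh):
    """One illustrative message passing step.

    A       : (n, n) adjacency matrix of the graph
    H       : (n, d) current node embeddings
    W_self  : (d, d_out) weight applied to each node's own embedding
    W_neigh : (d, d_out) weight applied to the aggregated neighbor messages
    """
    neighbor_sum = A @ H                     # sum messages from neighbors
    return relu(H @ W_self + neighbor_sum @ W_neigh)

def graph_embedding(A, H, layers):
    """Stack several layers, then pool node embeddings into a graph embedding."""
    for W_self, W_neigh in layers:
        H = message_passing_layer(A, H, W_self, W_neigh)
    return H.sum(axis=0)                     # permutation-invariant readout

# Tiny usage example: a 3-node path graph with random weights.
rng = np.random.default_rng(0)
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
H0 = rng.normal(size=(3, 4))
layers = [(rng.normal(size=(4, 4)), rng.normal(size=(4, 4))) for _ in range(2)]
print(graph_embedding(A, H0, layers))
```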
Syllabus
Intro
Machine Learning in one picture
Machine Learning with Graph Data: Applications
Outline
GNNs: Origins and Relations
Message Passing Graph Neural Networks
Message Passing for Node Embedding
Fully connected Neural Network (FNN)
Message Passing Tree
Function Approximation and Graph Distinction
Color refinement/Weisfeiler-Leman algorithm (see the sketch after this syllabus)
Improving discriminative power
Node IDs and Local Algorithms
The challenge with generalization
Bounding the generalization gap
Neural Tangent Kernel
Computational structure
Algorithmic Alignment
Big picture: when may extrapolation "work"?
Extrapolation in fully connected ReLU networks
Implications for the full GNN
Open Questions...
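The syllabus item on color refinement refers to the 1-dimensional Weisfeiler-Leman test, whose discriminative power upper-bounds that of message passing GNNs. Below is a minimal sketch of the refinement loop under assumed conventions (a dictionary-of-neighbors graph representation and integer relabeling of color signatures); it is for illustration only and is not the lecture's own material.

```python
from collections import Counter

def color_refinement(adj, max_rounds=None):
    """1-WL color refinement on a graph given as {node: iterable of neighbors}.

    Every node starts with the same color; in each round a node's new color
    encodes its current color together with the multiset of its neighbors'
    colors.  The loop stops once the color partition is stable.
    """
    colors = {v: 0 for v in adj}
    max_rounds = max_rounds or len(adj)
    for _ in range(max_rounds):
        signatures = {
            v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
            for v in adj
        }
        # Relabel distinct signatures with fresh integer colors.
        relabel = {sig: i for i, sig in enumerate(sorted(set(signatures.values())))}
        new_colors = {v: relabel[signatures[v]] for v in adj}
        if new_colors == colors:             # partition is stable: stop
            break
        colors = new_colors
    return Counter(colors.values())          # color histogram of the graph

# Two graphs 1-WL cannot distinguish: a 6-cycle vs. two disjoint triangles.
cycle6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
triangles = {0: [1, 2], 1: [0, 2], 2: [0, 1],
             3: [4, 5], 4: [3, 5], 5: [3, 4]}
print(color_refinement(cycle6) == color_refinement(triangles))  # True
```

The final comparison prints True because both graphs are 2-regular on six vertices and therefore receive identical color histograms, a standard example of graphs that 1-WL, and hence plain message passing GNNs, cannot tell apart.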
Taught by
International Mathematical Union
Related Courses
Reinforcement Learning - Indian Institute of Technology Madras via Swayam
Numerical Analysis - Vidyasagar University via Swayam
Reinforcement Learning Course - YouTube
Numerical Methods in EXCEL/VBA PROGRAMING - Udemy
Taylor Series - 3Blue1Brown via YouTube