Task Structure and Generalization in Graph Neural Networks
Offered By: Institute for Pure & Applied Mathematics (IPAM) via YouTube
Course Description
Overview
Explore the interplay between task structure and generalization in graph neural networks (GNNs) through this lecture by Stefanie Jegelka from MIT. Delve into GNNs as popular tools for learning algorithmic tasks, and into their less well understood generalization properties. Examine the relationship between target algorithms and architectural inductive biases, and discover how different network structures affect learning efficiency. Gain insight into how this relationship can be formalized and what it implies for generalization both within and beyond the training distribution. Learn about empirical evidence, algorithmic alignment, and the importance of training graphs for GNN performance. Understand the challenges of extrapolation and the role of ReLU feedforward networks in this context. Enhance your knowledge of deep learning and combinatorial optimization through this exploration of task structure and generalization in graph neural networks.
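As a rough illustration of the kind of architecture the lecture analyzes (this sketch is not material from the lecture itself), a single message-passing layer sums neighbor features and applies a ReLU update; the function name, scalar features, and weight are hypothetical choices made for brevity:

```python
# Minimal sketch of one GNN message-passing layer (illustrative only).
# Each node aggregates its neighbors' features by summation, then applies
# a ReLU update -- the kind of architectural choice whose alignment with
# the target algorithm the lecture studies.

def gnn_layer(features, edges, weight):
    """One round of sum-aggregation message passing.

    features: dict mapping node -> float feature
    edges: list of (u, v) undirected edges
    weight: scalar applied to the aggregated neighbor messages
    """
    # Collect the neighbor-feature sum for every node.
    agg = {v: 0.0 for v in features}
    for u, v in edges:
        agg[u] += features[v]
        agg[v] += features[u]
    # Update: ReLU(own feature + weighted neighbor sum).
    return {v: max(0.0, features[v] + weight * agg[v]) for v in features}

# Example: a path graph 0-1-2 with unit features.
out = gnn_layer({0: 1.0, 1: 1.0, 2: 1.0}, [(0, 1), (1, 2)], weight=0.5)
print(out)  # node 1 has two neighbors, so it receives a larger update
```

Swapping the sum for a different aggregator (mean, max) changes the inductive bias, which is one way the match between architecture and target algorithm shows up in practice.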
Syllabus
Intro
Algorithmic Reasoning Tasks
Generalization Analysis of GNNs
Graph Neural Networks
Architectures
Algorithmic Alignment
Empirical Evidence
Alignment more generally
Extrapolation
ReLU feedforward networks
Importance of training graphs
Summary: Task Structure and Generalization
Taught by
Institute for Pure & Applied Mathematics (IPAM)
Related Courses
An Introduction to Computer Networks (Stanford University via Independent)
Introduction aux réseaux cellulaires (Institut Mines-Télécom via Independent)
Introduction aux réseaux mobiles (Institut Mines-Télécom via France Université Numérique)
Comprendre la 4G (Institut Mines-Télécom via France Université Numérique)
4G Network Essentials (Institut Mines-Télécom via edX)