Task Structure and Generalization in Graph Neural Networks
Offered By: Institute for Pure & Applied Mathematics (IPAM) via YouTube
Course Description
Overview
Explore the interplay between task structure and generalization in graph neural networks (GNNs) in this lecture by Stefanie Jegelka of MIT. GNNs are popular tools for learning algorithmic tasks, yet their generalization properties remain poorly understood. Examine how the relationship between a target algorithm and a network's architectural inductive biases affects learning efficiency, how this relationship can be formalized, and what it implies for generalization both within and beyond the training distribution. The lecture covers empirical evidence for algorithmic alignment, the importance of the training graphs for GNN performance, and the challenges of extrapolation, including how ReLU feedforward networks behave outside the training range. These topics connect deep learning with combinatorial optimization.
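To make the notion of algorithmic alignment concrete, here is a minimal Python sketch (an illustration under our own assumptions, not code from the lecture): one round of the Bellman-Ford shortest-path relaxation, written in the same "message, aggregate, update" shape that a GNN message-passing layer computes. The intuition from the alignment literature is that a GNN whose layer structure mirrors the target algorithm's update rule should learn that task more sample-efficiently. The toy graph and weights below are made up for the example.

```python
def bellman_ford_round(dist, edges):
    """Classic form of one relaxation: d[v] <- min(d[v], min_u d[u] + w(u, v))."""
    new_dist = dist.copy()
    for u, v, w in edges:
        new_dist[v] = min(new_dist[v], dist[u] + w)
    return new_dist

def gnn_style_round(dist, edges, num_nodes):
    """The same computation phrased as message passing: each node
    collects messages d[u] + w from its in-neighbors (message function),
    aggregates them with a min (aggregation), and combines the result
    with its own current state (update)."""
    messages = {v: [] for v in range(num_nodes)}
    for u, v, w in edges:
        messages[v].append(dist[u] + w)        # message function
    return [min([dist[v]] + messages[v])       # min-aggregation + update
            for v in range(num_nodes)]

# Tiny directed graph: 0 -> 1 (w=2), 0 -> 2 (w=5), 1 -> 2 (w=1)
edges = [(0, 1, 2.0), (0, 2, 5.0), (1, 2, 1.0)]
INF = float("inf")
dist = [0.0, INF, INF]        # distances from source node 0

for _ in range(2):            # two rounds suffice for this 3-node graph
    dist = gnn_style_round(dist, edges, num_nodes=3)
print(dist)                   # [0.0, 2.0, 3.0]
```

In this view, a GNN layer with a min-style aggregator is structurally "aligned" with Bellman-Ford, whereas a sum aggregator would have to approximate the min nonlinearly; this is the kind of architecture-task relationship the lecture formalizes.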
Syllabus
Intro
Algorithmic Reasoning Tasks
Generalization Analysis of GNNs
Graph Neural Networks
Architectures
Algorithmic Alignment
Empirical Evidence
Alignment more generally
Extrapolation
ReLU feedforward networks
Importance of training graphs
Summary: Task Structure and Generalization
Taught by
Institute for Pure & Applied Mathematics (IPAM)
Related Courses
Neural Networks for Machine Learning, University of Toronto via Coursera
Machine Learning Techniques (機器學習技法), National Taiwan University via Coursera
Machine Learning Capstone: An Intelligent Application with Deep Learning, University of Washington via Coursera
Applied Problems of Data Analysis (Прикладные задачи анализа данных), Moscow Institute of Physics and Technology via Coursera
Leading Ambitious Teaching and Learning, Microsoft via edX