HOT: Higher-Order Dynamic Graph Representation Learning with Efficient Transformers
Offered By: Scalable Parallel Computing Lab, SPCL @ ETH Zurich via YouTube
Course Description
Overview
Explore dynamic graph representation learning with efficient transformers in this conference talk from the Second Learning on Graphs Conference (LoG'23). Dive into the HOT model, which improves dynamic link prediction by leveraging higher-order graph structures. Discover how k-hop neighbors and subgraphs are encoded into the transformer's attention matrix to raise prediction accuracy. Learn how the increased memory pressure this causes is countered with hierarchically structured attention matrices. Examine the model's architecture, including the encoding of higher-order structures, patching, alignment and concatenation, and the block recurrent transformer. Compare HOT's performance against other dynamic graph representation learning schemes and understand its applicability to a wide range of dynamic graph learning workloads.
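To make the core idea concrete, here is a minimal sketch, assuming a dense adjacency matrix and plain scaled dot-product attention, of how a node's k-hop neighborhood can be turned into the token sequence a transformer attends over. It illustrates the general technique only, not HOT's actual implementation (which restructures the attention matrix hierarchically); the helper names k_hop_neighbors, higher_order_tokens, and attention_matrix are hypothetical.

```python
import numpy as np

def k_hop_neighbors(adj, node, k):
    """BFS on a dense adjacency matrix: all nodes reachable from `node` in <= k hops."""
    frontier, seen = {node}, {node}
    for _ in range(k):
        nxt = set()
        for u in frontier:
            nxt |= set(np.flatnonzero(adj[u])) - seen
        seen |= nxt
        frontier = nxt
    return sorted(seen - {node})

def higher_order_tokens(adj, feats, node, k):
    """Stack the node's feature vector with those of its k-hop neighborhood,
    yielding the token sequence the transformer attends over."""
    return np.stack([feats[node]] + [feats[v] for v in k_hop_neighbors(adj, node, k)])

def attention_matrix(tokens):
    """Scaled dot-product self-attention scores over the token sequence."""
    d = tokens.shape[1]
    scores = tokens @ tokens.T / np.sqrt(d)
    scores -= scores.max(axis=1, keepdims=True)  # softmax numerical stability
    w = np.exp(scores)
    return w / w.sum(axis=1, keepdims=True)

# Toy usage: a 5-node path graph with random 8-dimensional features.
rng = np.random.default_rng(0)
adj = np.zeros((5, 5), dtype=int)
for u, v in [(0, 1), (1, 2), (2, 3), (3, 4)]:
    adj[u, v] = adj[v, u] = 1
feats = rng.standard_normal((5, 8))
A = attention_matrix(higher_order_tokens(adj, feats, node=0, k=2))
print(A.shape)  # (3, 3): node 0 plus its 2-hop neighborhood {1, 2}
```

Note that the attention matrix here grows quadratically with the size of the encoded neighborhood, which is exactly the memory pressure the talk addresses with hierarchical attention matrices.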
Syllabus
Introduction: Link Prediction
Introduction: Higher-Order Graph Structures
Higher-Order Enhanced Pipeline
Temporal Higher-Order Structures
Formal Setting of Dynamic Link Prediction (see the sketch after this list)
Model Architecture: Encoding Higher-Order Structures
Model Architecture: Patching, Alignment and Concatenation
Model Architecture: Block Recurrent Transformer
Evaluation
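For the "Formal Setting of Dynamic Link Prediction" item, here is a minimal sketch of the task setup as commonly formalized: a dynamic graph is a chronologically ordered stream of timestamped interactions, the model observes everything up to a split time, and must predict which node pairs interact afterwards. This is a generic formulation, not necessarily the exact one used in the talk; Event and split_by_time are hypothetical names.

```python
from typing import List, Tuple

# A dynamic graph as a stream of timestamped interactions (u, v, t).
Event = Tuple[int, int, float]

def split_by_time(events: List[Event], t_split: float):
    """Dynamic link prediction setup: interactions up to t_split form the
    observed history; the task is to predict the node pairs that interact later."""
    history = [e for e in events if e[2] <= t_split]
    targets = {(u, v) for u, v, t in events if t > t_split}
    return history, targets

events = [(0, 1, 1.0), (1, 2, 2.0), (0, 2, 3.0), (2, 3, 4.0)]
history, targets = split_by_time(events, t_split=2.5)
print(history)  # [(0, 1, 1.0), (1, 2, 2.0)]
print(targets)  # {(0, 2), (2, 3)}
```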
Taught by
Scalable Parallel Computing Lab, SPCL @ ETH Zurich
Related Courses
Aplicaciones de la teoría de grafos a la vida real (Applications of Graph Theory to Real Life)
Miríadax
Aplicaciones de la Teoría de Grafos a la vida real
Universitat Politècnica de València via UPV [X]
Introduction to Computational Thinking and Data Science
Massachusetts Institute of Technology via edX
Genome Sequencing (Bioinformatics II)
University of California, San Diego via Coursera
Algorithmic Information Dynamics: From Networks to Cells
Santa Fe Institute via Complexity Explorer