
HOT: Higher-Order Dynamic Graph Representation Learning with Efficient Transformers

Offered By: Scalable Parallel Computing Lab, SPCL @ ETH Zurich via YouTube

Tags

Graph Theory, Machine Learning, Neural Networks, Transformers, Attention Mechanisms, Dynamic Graphs

Course Description

Overview

Explore dynamic graph representation learning with efficient transformers in this conference talk from the Second Learning on Graphs Conference (LoG'23). Dive into the HOT model, which enhances link prediction by leveraging higher-order graph structures. Discover how k-hop neighbors and subgraphs are encoded into the attention matrix of transformers to improve accuracy. Learn about the challenges of increased memory pressure and the innovative solutions using hierarchical attention matrices. Examine the model's architecture, including encoding higher-order structures, patching, alignment, concatenation, and the block recurrent transformer. Compare HOT's performance against other dynamic graph representation learning schemes and understand its potential applications in various dynamic graph learning workloads.
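As a rough illustration of the encoding idea described above, the sketch below adds a learned bias to a transformer's attention logits based on k-hop reachability in a graph snapshot. All names, shapes, and the bias scheme are assumptions for illustration; HOT's actual encoding of higher-order structures into the attention matrix differs in detail.

```python
import torch
import torch.nn.functional as F

def khop_attention_bias(adj: torch.Tensor, k: int,
                        hop_weights: torch.Tensor) -> torch.Tensor:
    """Additive attention bias from k-hop reachability (illustrative).

    adj:         (N, N) float {0, 1} adjacency matrix of one snapshot
    hop_weights: (k,)   learned scalar bias per hop distance (hypothetical)
    """
    N = adj.shape[0]
    reached = torch.eye(N, dtype=torch.bool)    # hop 0: the node itself
    frontier = adj > 0                          # nodes reachable in 1 hop
    bias = torch.zeros(N, N)
    for hop in range(k):
        newly = frontier & ~reached             # first reached at this hop
        bias = bias + hop_weights[hop] * newly.float()
        reached = reached | frontier
        frontier = (frontier.float() @ adj) > 0  # walks one step longer
    return bias

def biased_attention(q, k_, v, bias):
    """Scaled dot-product attention with a structural bias on the logits."""
    scores = q @ k_.transpose(-2, -1) / q.shape[-1] ** 0.5
    return F.softmax(scores + bias, dim=-1) @ v
```

With k = 2, for instance, direct neighbors and two-hop neighbors each receive their own learned offset (e.g. hop_weights = torch.nn.Parameter(torch.zeros(2))), so attention can distinguish nodes by structural proximity rather than by features alone. The memory cost of richer structural encodings is exactly the pressure the talk's hierarchical attention matrices aim to relieve.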

Syllabus

Introduction: Link Prediction
Introduction: Higher-Order Graph Structures
Higher-Order Enhanced Pipeline
Temporal Higher-Order Structures
Formal Setting of Dynamic Link Prediction
Model Architecture: Encoding Higher-Order Structures
Model Architecture: Patching, Alignment and Concatenation
Model Architecture: Block Recurrent Transformer (see the sketch after this list)
Evaluation
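
The block recurrent transformer named in the syllabus bounds memory by attending within fixed-size blocks of the temporal sequence while carrying a compact recurrent state across blocks. The sketch below illustrates that general mechanism only; the layer name, sizes, and the state-update rule are assumptions, not HOT's implementation.

```python
import torch
import torch.nn as nn

class BlockRecurrentLayer(nn.Module):
    """Attend within fixed-size blocks, carrying a small recurrent state
    between blocks (illustrative sketch, not the HOT implementation)."""

    def __init__(self, d_model: int, n_heads: int = 4, n_state: int = 16):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.state_update = nn.Linear(d_model, d_model)
        self.init_state = nn.Parameter(torch.zeros(1, n_state, d_model))

    def forward(self, x: torch.Tensor, block_size: int = 64) -> torch.Tensor:
        B, T, _ = x.shape
        state = self.init_state.expand(B, -1, -1)
        outputs = []
        for start in range(0, T, block_size):
            block = x[:, start:start + block_size]
            # Each block attends over itself plus the carried state,
            # so cost per block is O(block_size^2) rather than O(T^2).
            context = torch.cat([state, block], dim=1)
            out, _ = self.attn(block, context, context)
            outputs.append(out)
            # Fold a summary of the block into the recurrent state.
            summary = out.mean(dim=1, keepdim=True)
            state = torch.tanh(self.state_update(state + summary))
        return torch.cat(outputs, dim=1)
```

The trade-off this illustrates is the one the overview alludes to: attention never spans the whole temporal history at once, so memory stays bounded as the dynamic graph grows.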


Taught by

Scalable Parallel Computing Lab, SPCL @ ETH Zurich
