Transformers Meet Directed Graphs - Exploring Direction-Aware Positional Encodings
Offered By: Valence Labs via YouTube
Course Description
Overview
Explore the application of transformers to directed graphs in this comprehensive conference talk by Simon Geisler from Valence Labs. Dive into direction- and structure-aware positional encodings for directed graphs, including eigenvectors of the Magnetic Laplacian and directional random walk encodings. Learn how these techniques can be applied to domains such as source code and logic circuits. Discover the benefits of incorporating directionality information in various downstream tasks, including correctness testing of sorting networks and source code understanding. Examine the data-flow-centric graph construction approach that outperforms previous state-of-the-art methods on the Open Graph Benchmark Code2. Follow along as the speaker covers topics like sinusoidal encodings, signal processing, Graph Fourier Basis, harmonics for directed graphs, and the architecture of the proposed model.
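The positional encodings described above are built from the Magnetic Laplacian, a Hermitian variant of the graph Laplacian that encodes edge direction as a complex phase. The following NumPy sketch illustrates the general construction on a toy graph; it is not the speaker's implementation, and the potential q = 0.25 and the number of eigenvectors k = 2 are illustrative assumptions.

```python
import numpy as np

def magnetic_laplacian(A, q=0.25):
    """Magnetic Laplacian L^(q) = D_s - A_s * exp(i * Theta^(q)).

    A : (n, n) dense adjacency matrix of a directed graph (0/1 entries).
    q : potential; q = 0 recovers the ordinary symmetrized Laplacian.
    """
    A_s = 0.5 * (A + A.T)                  # symmetrized adjacency
    D_s = np.diag(A_s.sum(axis=1))         # symmetrized degree matrix
    theta = 2.0 * np.pi * q * (A - A.T)    # phase term encodes edge direction
    return D_s - A_s * np.exp(1j * theta)  # Hermitian by construction

# Toy directed graph 0 -> 1 -> 2
A = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [0., 0., 0.]])

L = magnetic_laplacian(A)
eigvals, eigvecs = np.linalg.eigh(L)       # real eigenvalues, complex eigenvectors

# Real and imaginary parts of the k smallest eigenvectors serve as
# direction-aware node positional encodings (k = 2 is an arbitrary choice).
# Note: each eigenvector is only defined up to a complex phase; the talk's
# "Ambiguity of Eigenvectors" section addresses this.
k = 2
pe = np.concatenate([eigvecs[:, :k].real, eigvecs[:, :k].imag], axis=1)
print(pe)
```

Because L^(q) is Hermitian, its eigenvalues stay real while its eigenvectors become complex; the imaginary parts carry the directionality that the ordinary symmetric Laplacian discards.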
Syllabus
- Intro
- How do Language Models Encode Code?
- Sinusoidal Encodings
- Signal Processing: DFT
- Graph Fourier Basis
- Magnetic Laplacian
- Harmonics for Directed Graphs
- Ambiguity of Eigenvectors
- Architecture
- Distance Prediction
- Correctness Prediction of Sorting Networks
- Open Graph Benchmark Code2
- Summary
- Q+A
Taught by
Valence Labs
Related Courses
NeRF - Representing Scenes as Neural Radiance Fields for View Synthesis
Yannic Kilcher via YouTube
Perceiver - General Perception with Iterative Attention
Yannic Kilcher via YouTube
LambdaNetworks - Modeling Long-Range Interactions Without Attention
Yannic Kilcher via YouTube
Attention Is All You Need - Transformer Paper Explained
Aleksa Gordić - The AI Epiphany via YouTube
NeRFs - Neural Radiance Fields - Paper Explained
Aladdin Persson via YouTube