
GraphSAGE - Inductive Representation Learning on Large Graphs - GNN Paper Explained

Offered By: Aleksa Gordić - The AI Epiphany via YouTube

Tags

Graph Neural Networks (GNN) Courses, Graph Theory Courses, Algorithm Analysis Courses, Representation Learning Courses

Course Description

Overview

Dive deep into the GraphSAGE paper, exploring its groundbreaking approach to applying Graph Neural Networks (GNNs) to large-scale graphs. Learn about the key components of GraphSAGE, including its training process, neighborhood function, and aggregator functions. Understand the method's expressiveness, its mini-batch implementation, and how it addresses the problems of previous graph embedding techniques. Compare GraphSAGE to other popular GNN architectures such as GCN and GAT, gaining insight into its advantages and applications to large-scale graph data.
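
For a concrete sense of the aggregation step discussed in the video, here is a minimal NumPy sketch of a single GraphSAGE layer with the mean aggregator. The function name, array shapes, and the fixed neighborhood sample size of 5 are illustrative assumptions made for this page, not the paper's or the video's reference implementation.

```python
import numpy as np

def graphsage_mean_layer(H, neighbors, W, num_samples=5, rng=None):
    """One GraphSAGE layer with the mean aggregator (illustrative sketch).

    H           : (num_nodes, d_in) array of current node embeddings
    neighbors   : dict mapping node id -> list of neighbor node ids
    W           : (2 * d_in, d_out) weight matrix applied to [self || mean(neighbors)]
    num_samples : fixed-size neighborhood sample, as in the paper
    """
    rng = rng or np.random.default_rng(0)
    num_nodes, d_in = H.shape
    H_new = np.zeros((num_nodes, W.shape[1]))

    for v in range(num_nodes):
        nbrs = neighbors.get(v, [])
        if nbrs:
            # Sample a fixed-size neighborhood (with replacement if it is too small).
            sampled = rng.choice(nbrs, size=num_samples,
                                 replace=len(nbrs) < num_samples)
            agg = H[sampled].mean(axis=0)
        else:
            agg = np.zeros(d_in)
        # Concatenate the node's own embedding with the aggregated neighborhood,
        # apply the shared weight matrix, then a ReLU non-linearity.
        z = np.concatenate([H[v], agg]) @ W
        H_new[v] = np.maximum(z, 0.0)

    # L2-normalize each embedding, as GraphSAGE does after every layer.
    norms = np.linalg.norm(H_new, axis=1, keepdims=True)
    return H_new / np.clip(norms, 1e-12, None)


# Tiny usage example on a 4-node path graph.
if __name__ == "__main__":
    rng = np.random.default_rng(42)
    H = rng.normal(size=(4, 8))
    neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
    W = rng.normal(size=(16, 4))
    print(graphsage_mean_layer(H, neighbors, W).shape)  # (4, 4)
```

Stacking K such layers and sampling a fixed number of neighbors per hop is what makes the mini-batch, inductive training discussed in the video tractable on large graphs.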

Syllabus

Intro
Problems with previous methods
High-level overview of the method
Some notes on the related work
Pseudo-code explanation
How do we train GraphSAGE?
Note on the neighborhood function
Aggregator functions
Results
Expressiveness of GraphSAGE
Mini-batch version
Problems with graph embedding methods (drift)
Comparison with GCN and GAT

Taught by

Aleksa Gordić - The AI Epiphany

Related Courses

From Graph to Knowledge Graph – Algorithms and Applications
Microsoft via edX
Social Network Analysis
Indraprastha Institute of Information Technology Delhi via Swayam
Stanford Seminar - Representation Learning for Autonomous Robots, Anima Anandkumar
Stanford University via YouTube
Unsupervised Brain Models - How Does Deep Learning Inform Neuroscience?
Yannic Kilcher via YouTube
Emerging Properties in Self-Supervised Vision Transformers - Facebook AI Research Explained
Yannic Kilcher via YouTube