SOTAVerified

Graph Representation Learning

The goal of graph representation learning is to construct a set of features ('embeddings') representing the structure of the graph and the data on it. We can distinguish among node-wise embeddings, which represent each node of the graph; edge-wise embeddings, which represent each edge; and graph-wise embeddings, which represent the graph as a whole.

Source: SIGN: Scalable Inception Graph Neural Networks
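The three kinds of embeddings can be illustrated with a minimal sketch. The following is not the SIGN method itself, just a toy example (using an assumed 4-node graph and made-up features) of one round of mean-neighbour aggregation producing node-wise embeddings, from which edge-wise and graph-wise embeddings are derived:

```python
import numpy as np

# Toy undirected graph on 4 nodes, given as an adjacency matrix
# (hypothetical example, not from any benchmark).
A = np.array([
    [0, 1, 1, 0],
    [1, 0, 1, 0],
    [1, 1, 0, 1],
    [0, 0, 1, 0],
], dtype=float)

# Initial node features: one row per node.
X = np.array([
    [1.0, 0.0],
    [0.0, 1.0],
    [1.0, 1.0],
    [0.0, 0.5],
])

# Node-wise embeddings: one round of mean aggregation with self-loops,
# i.e. each node's embedding is the average of itself and its neighbours.
A_hat = A + np.eye(A.shape[0])            # add self-loops
D_inv = np.diag(1.0 / A_hat.sum(axis=1))  # inverse degree matrix
node_emb = D_inv @ A_hat @ X              # shape: (num_nodes, dim)

# Edge-wise embedding: e.g. concatenate the two endpoint embeddings
# for the edge (0, 1).
edge_emb = np.concatenate([node_emb[0], node_emb[1]])

# Graph-wise embedding: pool all node embeddings, e.g. by averaging.
graph_emb = node_emb.mean(axis=0)
```

Real methods replace the fixed averaging step with learned transformations (message passing, contrastive objectives, etc.), but the three output granularities are the same.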

Papers

Showing 311–320 of 982 papers

| Title | Status | Hype |
|---|---|---|
| DINE: Dimensional Interpretability of Node Embeddings | Code | 0 |
| A Unified View on Neural Message Passing with Opinion Dynamics for Social Networks | | 0 |
| Learning node representation via Motif Coarsening | Code | 0 |
| Augment to Interpret: Unsupervised and Inherently Interpretable Graph Embeddings | Code | 0 |
| Graph Representation Learning Towards Patents Network Analysis | | 0 |
| Graph Contrastive Learning Meets Graph Meta Learning: A Unified Method for Few-shot Node Tasks | Code | 1 |
| Deep Prompt Tuning for Graph Transformers | | 0 |
| UniKG: A Benchmark and Universal Embedding for Large-Scale Knowledge Graphs | Code | 0 |
| Curve Your Attention: Mixed-Curvature Transformers for Graph Representation Learning | | 0 |
| Spatio-Temporal Contrastive Self-Supervised Learning for POI-level Crowd Flow Inference | | 0 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Pi-net-linear | Error (mm) | 0.47 | | Unverified |