SOTAVerified

Graph Representation Learning

The goal of Graph Representation Learning is to construct a set of features ('embeddings') representing the structure of the graph and the data on it. We can distinguish among node-wise embeddings, representing each node of the graph; edge-wise embeddings, representing each edge in the graph; and graph-wise embeddings, representing the graph as a whole.

Source: SIGN: Scalable Inception Graph Neural Networks
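The three embedding levels above can be illustrated with a minimal sketch (not from the SIGN paper): one round of neighbor averaging on a toy graph produces node-wise embeddings, concatenating endpoint embeddings gives edge-wise embeddings, and mean-pooling over nodes gives a graph-wise embedding. The toy adjacency matrix and feature dimensions are assumptions for illustration only.

```python
# Minimal sketch (assumed toy example): node-, edge-, and graph-wise
# embeddings from one step of neighbor averaging on a small graph.
import numpy as np

# Toy graph: 4 nodes, symmetric adjacency matrix A, 3-dim input features X.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.random.default_rng(0).normal(size=(4, 3))

# Node-wise embeddings: average each node's features with its neighbors'
# via the row-normalized matrix A_hat = D^-1 (A + I), as in simple GNN layers.
A_hat = A + np.eye(4)
A_hat /= A_hat.sum(axis=1, keepdims=True)
node_emb = A_hat @ X                     # shape (4, 3): one vector per node

# Edge-wise embeddings: concatenate the embeddings of each edge's endpoints.
edges = np.argwhere(A > 0)               # directed pairs (i, j) with A[i, j] > 0
edge_emb = np.hstack([node_emb[edges[:, 0]],
                      node_emb[edges[:, 1]]])  # shape (E, 6)

# Graph-wise embedding: mean-pool the node embeddings into a single vector.
graph_emb = node_emb.mean(axis=0)        # shape (3,)
```

Any permutation-invariant pooling (sum, max, attention) could replace the mean in the last step; the choice is a modeling decision, not part of the definition.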

Papers

Showing 81–90 of 982 papers

Title | Status | Hype
GQWformer: A Quantum-based Transformer for Graph Representation Learning | — | 0
From ChebNet to ChebGibbsNet | Code | 0
Toward Fair Graph Neural Networks Via Dual-Teacher Knowledge Distillation | — | 0
Perturbation Ontology based Graph Attention Networks | — | 0
Instance-Aware Graph Prompt Learning | — | 0
GrokFormer: Graph Fourier Kolmogorov-Arnold Transformers | Code | 1
TANGNN: a Concise, Scalable and Effective Graph Neural Networks with Top-m Attention Mechanism for Graph Representation Learning | Code | 0
Conditional Distribution Learning on Graphs | Code | 0
A survey on Graph Deep Representation Learning for Facial Expression Recognition | — | 0
Shedding Light on Problems with Hyperbolic Graph Learning | — | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Pi-net-linear | Error (mm) | 0.47 | — | Unverified