SOTAVerified

Graph Representation Learning

The goal of Graph Representation Learning is to construct a set of features ('embeddings') representing the structure of the graph and the data on it. We can distinguish among node-wise embeddings, representing each node of the graph; edge-wise embeddings, representing each edge in the graph; and graph-wise embeddings, representing the graph as a whole.
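A minimal sketch of how the three embedding types relate, using one round of neighbourhood mean-aggregation in plain NumPy (an illustration of the general idea only, not the SIGN method from the cited source):

```python
import numpy as np

# Toy graph: 4 nodes, symmetric adjacency matrix A, random node features X.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.random.rand(4, 8)  # 8-dimensional input features per node

# Node-wise embeddings: average the features of each node's neighbours.
deg = A.sum(axis=1, keepdims=True)
node_emb = (A @ X) / deg                      # shape (4, 8)

# Edge-wise embeddings: concatenate the embeddings of an edge's endpoints.
edges = np.argwhere(A > 0)                    # (i, j) pairs, 8 directed edges
edge_emb = np.concatenate([node_emb[edges[:, 0]],
                           node_emb[edges[:, 1]]], axis=1)  # shape (8, 16)

# Graph-wise embedding: mean-pool over all node embeddings.
graph_emb = node_emb.mean(axis=0)             # shape (8,)

print(node_emb.shape, edge_emb.shape, graph_emb.shape)
```

Real graph neural networks interleave such aggregation steps with learned weight matrices and nonlinearities; the pooling step at the end is one common way to obtain a single vector for the whole graph.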

Source: SIGN: Scalable Inception Graph Neural Networks

Papers

Showing 11–20 of 982 papers

| Title | Status | Hype |
|---|---|---|
| NeuralKG-ind: A Python Library for Inductive Knowledge Graph Representation Learning | Code | 2 |
| Recipe for a General, Powerful, Scalable Graph Transformer | Code | 2 |
| Structure-Aware Transformer for Graph Representation Learning | Code | 2 |
| Towards Relation-centered Pooling and Convolution for Heterogeneous Graph Learning Networks | Code | 2 |
| A Survey of Pretraining on Graphs: Taxonomy, Methods, and Applications | Code | 2 |
| A Survey on Knowledge Graphs: Representation, Acquisition and Applications | Code | 2 |
| Do Transformers Really Perform Bad for Graph Representation? | Code | 2 |
| Dynamic Graph Representation with Knowledge-aware Attention for Histopathology Whole Slide Image Analysis | Code | 2 |
| Harnessing Explanations: LLM-to-LM Interpreter for Enhanced Text-Attributed Graph Representation Learning | Code | 2 |
| A Gentle Introduction to Deep Learning for Graphs | Code | 1 |
Page 2 of 99

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Pi-net-linear | Error (mm) | 0.47 | | Unverified |