
Graph Representation Learning

The goal of graph representation learning is to construct a set of features ('embeddings') representing the structure of the graph and the data on it. We can distinguish node-wise embeddings, which represent each node of the graph; edge-wise embeddings, which represent each edge; and graph-wise embeddings, which represent the graph as a whole.

Source: SIGN: Scalable Inception Graph Neural Networks
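The three embedding levels above can be illustrated with a toy example. The sketch below is not from any paper listed here; it is a minimal, self-contained GCN-style layer (NumPy only, random weights) that produces node-wise embeddings by neighbourhood aggregation, edge-wise embeddings by concatenating endpoint embeddings, and a graph-wise embedding by mean pooling:

```python
import numpy as np

# Illustrative sketch only: one message-passing layer on a toy 4-node graph.
rng = np.random.default_rng(0)

A = np.array([[0, 1, 1, 0],          # symmetric adjacency matrix
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = rng.normal(size=(4, 8))          # initial node features (4 nodes, 8 dims)
W = rng.normal(size=(8, 16))         # weight matrix (random here, learned in practice)

# Symmetrically normalised adjacency with self-loops, as in a GCN layer.
A_hat = A + np.eye(4)
d = A_hat.sum(axis=1)
A_norm = A_hat / np.sqrt(np.outer(d, d))

# Node-wise embeddings: aggregate neighbours, transform, apply nonlinearity.
H = np.maximum(A_norm @ X @ W, 0.0)                           # shape (4, 16)

# Edge-wise embeddings: concatenate the two endpoint embeddings per edge.
edges = np.argwhere(np.triu(A) > 0)                           # 4 undirected edges
E = np.concatenate([H[edges[:, 0]], H[edges[:, 1]]], axis=1)  # shape (4, 32)

# Graph-wise embedding: pool node embeddings with a mean readout.
g = H.mean(axis=0)                                            # shape (16,)
```

Other readout choices (sum, max, attention-weighted pooling) yield graph-wise embeddings with different invariance and expressiveness trade-offs.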

Papers

Showing 476–500 of 982 papers

Title | Status | Hype
Temporal Graph Representation Learning with Adaptive Augmentation Contrastive | | 0
HDGL: A hierarchical dynamic graph representation learning model for brain disorder classification | | 0
Calibrate and Boost Logical Expressiveness of GNN Over Multi-Relational and Temporal Graphs | Code | 0
Graph Representation Learning for Infrared and Visible Image Fusion | | 0
DyTSCL: Dynamic graph representation via tempo-structural contrastive learning | Code | 0
Privacy-preserving design of graph neural networks with applications to vertical federated learning | | 0
Diversified Node Sampling based Hierarchical Transformer Pooling for Graph Representation Learning | | 0
A Causal Disentangled Multi-Granularity Graph Classification Method | | 0
Knowledge-Induced Medicine Prescribing Network for Medication Recommendation | | 0
Spectral-Aware Augmentation for Enhanced Graph Representation Learning | | 0
Graph AI in Medicine | | 0
Enhancing the Performance of Automated Grade Prediction in MOOC using Graph Representation Learning | Code | 0
SignGT: Signed Attention-based Graph Transformer for Graph Representation Learning | | 0
Self-supervision meets kernel graph neural models: From architecture to augmentations | | 0
Self-Pro: A Self-Prompt and Tuning Framework for Graph Neural Networks | Code | 0
SGA: A Graph Augmentation Method for Signed Graph Neural Networks | | 0
Topology-guided Hypergraph Transformer Network: Unveiling Structural Insights for Improved Representation | | 0
An Edge-Aware Graph Autoencoder Trained on Scale-Imbalanced Data for Traveling Salesman Problems | | 0
A Unified View on Neural Message Passing with Opinion Dynamics for Social Networks | | 0
DINE: Dimensional Interpretability of Node Embeddings | Code | 0
Transformers are efficient hierarchical chemical graph learners | Code | 0
Learning node representation via Motif Coarsening | Code | 0
Augment to Interpret: Unsupervised and Inherently Interpretable Graph Embeddings | Code | 0
Graph Representation Learning Towards Patents Network Analysis | | 0
Deep Prompt Tuning for Graph Transformers | | 0
Page 20 of 40

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Pi-net-linear | Error (mm) | 0.47 | | Unverified