SOTAVerified

Graph Representation Learning

The goal of Graph Representation Learning is to construct a set of features ('embeddings') that capture the structure of the graph and the data defined on it. We can distinguish among node-wise embeddings, which represent each node of the graph; edge-wise embeddings, which represent each edge; and graph-wise embeddings, which represent the graph as a whole.

Source: SIGN: Scalable Inception Graph Neural Networks
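The three embedding levels above can be illustrated with a minimal NumPy sketch (not taken from the cited paper): node embeddings via one GCN-style normalized message-passing step, an edge embedding by concatenating its endpoint embeddings, and a graph embedding by mean-pooling the nodes. The graph, features, and weights here are random toy data.

```python
import numpy as np

# Toy graph: 4 nodes, symmetric adjacency matrix.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
A_hat = A + np.eye(4)                              # add self-loops
D_inv = np.diag(1.0 / A_hat.sum(axis=1))           # row-normalization by degree

X = np.random.default_rng(0).normal(size=(4, 8))   # initial node features
W = np.random.default_rng(1).normal(size=(8, 16))  # weight matrix (random stand-in for a learned one)

# Node-wise embeddings: one normalized neighborhood-aggregation step.
H = np.tanh(D_inv @ A_hat @ X @ W)                 # shape (4, 16)

# Edge-wise embedding for edge (u, v): concatenate endpoint embeddings.
u, v = 0, 1
e_uv = np.concatenate([H[u], H[v]])                # shape (32,)

# Graph-wise embedding: mean-pool over all node embeddings.
g = H.mean(axis=0)                                 # shape (16,)
```

In practice the weight matrix is learned by gradient descent and several such layers are stacked; this sketch only shows how the three embedding granularities relate.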

Papers

Showing 451–500 of 982 papers

Title | Status | Hype
Graph Transformer GANs with Graph Masked Modeling for Architectural Layout Generation | | 0
Tensor Graph Convolutional Network for Dynamic Graph Representation Learning | | 0
Accurate and Scalable Estimation of Epistemic Uncertainty for Graph Neural Networks | | 0
Adversarial Representation with Intra-Modal and Inter-Modal Graph Contrastive Learning for Multimodal Emotion Recognition | | 0
PUMA: Efficient Continual Graph Learning for Node Classification with Graph Condensation | Code | 0
Hierarchical Topology Isomorphism Expertise Embedded Graph Contrastive Learning | Code | 0
Social Recommendation through Heterogeneous Graph Modeling of the Long-term and Short-term Preference Defined by Dynamic Time Spans | Code | 0
Domain Adaptive Graph Classification | | 0
LightGCN: Evaluated and Enhanced | Code | 0
scBiGNN: Bilevel Graph Representation Learning for Cell Type Classification from Single-cell RNA Sequencing Data | | 0
Dynamic Spiking Framework for Graph Neural Networks | | 0
Symmetry Breaking and Equivariant Neural Networks | | 0
EdgePruner: Poisoned Edge Pruning in Graph Contrastive Learning | | 0
Isomorphic-Consistent Variational Graph Auto-Encoders for Multi-Level Graph Representation Learning | | 0
Understanding Community Bias Amplification in Graph Representation Learning | | 0
On the Initialization of Graph Neural Networks | Code | 0
Large-scale Graph Representation Learning of Dynamic Brain Connectome with Transformers | | 0
HGPROMPT: Bridging Homogeneous and Heterogeneous Graphs for Few-shot Prompt Learning | | 0
Normed Spaces for Graph Embedding | Code | 0
HOT: Higher-Order Dynamic Graph Representation Learning with Efficient Transformers | | 0
Cycle Invariant Positional Encoding for Graph Representation Learning | Code | 0
Careful Selection and Thoughtful Discarding: Graph Explicit Pooling Utilizing Discarded Nodes | | 0
Cross-View Graph Consistency Learning for Invariant Graph Representations | Code | 0
Classification of developmental and brain disorders via graph convolutional aggregation | | 0
Topology Only Pre-Training: Towards Generalised Multi-Domain Graph Models | Code | 0
Temporal Graph Representation Learning with Adaptive Augmentation Contrastive | | 0
HDGL: A hierarchical dynamic graph representation learning model for brain disorder classification | | 0
Calibrate and Boost Logical Expressiveness of GNN Over Multi-Relational and Temporal Graphs | Code | 0
Graph Representation Learning for Infrared and Visible Image Fusion | | 0
DyTSCL: Dynamic graph representation via tempo-structural contrastive learning | Code | 0
Privacy-preserving design of graph neural networks with applications to vertical federated learning | | 0
Diversified Node Sampling based Hierarchical Transformer Pooling for Graph Representation Learning | | 0
A Causal Disentangled Multi-Granularity Graph Classification Method | | 0
Knowledge-Induced Medicine Prescribing Network for Medication Recommendation | | 0
Spectral-Aware Augmentation for Enhanced Graph Representation Learning | | 0
Graph AI in Medicine | | 0
Enhancing the Performance of Automated Grade Prediction in MOOC using Graph Representation Learning | Code | 0
SignGT: Signed Attention-based Graph Transformer for Graph Representation Learning | | 0
Self-supervision meets kernel graph neural models: From architecture to augmentations | | 0
Self-Pro: A Self-Prompt and Tuning Framework for Graph Neural Networks | Code | 0
SGA: A Graph Augmentation Method for Signed Graph Neural Networks | | 0
Topology-guided Hypergraph Transformer Network: Unveiling Structural Insights for Improved Representation | | 0
An Edge-Aware Graph Autoencoder Trained on Scale-Imbalanced Data for Traveling Salesman Problems | | 0
A Unified View on Neural Message Passing with Opinion Dynamics for Social Networks | | 0
DINE: Dimensional Interpretability of Node Embeddings | Code | 0
Transformers are efficient hierarchical chemical graph learners | Code | 0
Learning node representation via Motif Coarsening | Code | 0
Augment to Interpret: Unsupervised and Inherently Interpretable Graph Embeddings | Code | 0
Graph Representation Learning Towards Patents Network Analysis | | 0
Deep Prompt Tuning for Graph Transformers | | 0
Page 10 of 20

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Pi-net-linear | Error (mm) | 0.47 | | Unverified