
Universal Graph Transformer Self-Attention Networks

2019-09-26

Dai Quoc Nguyen, Tu Dinh Nguyen, Dinh Phung


Abstract

The transformer self-attention network has been widely used in research domains such as computer vision, image processing, and natural language processing. However, it has not been actively explored in graph neural networks (GNNs), where constructing an advanced aggregation function is essential. To this end, we present U2GNN, an effective GNN model that leverages a transformer self-attention mechanism followed by a recurrent transition to induce a powerful aggregation function for learning graph representations. Experimental results show that the proposed U2GNN achieves state-of-the-art accuracies on well-known benchmark datasets for graph classification. Our code is available at: https://github.com/daiquocnguyen/Graph-Transformer
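The abstract describes the aggregation function as a self-attention layer over a node's neighbourhood followed by a recurrent transition. A minimal numpy sketch of one such aggregation step is below; the weight shapes, the GRU-style form of the recurrent transition, and all variable names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # feature dimension (assumed)
n = 5  # target node plus 4 sampled neighbours (assumed)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def self_attention(H, Wq, Wk, Wv):
    """Scaled dot-product self-attention over the neighbourhood matrix H (n, d)."""
    Q, K, V = H @ Wq, H @ Wk, H @ Wv
    scores = softmax(Q @ K.T / np.sqrt(K.shape[1]))
    return scores @ V  # (n, d): attended neighbourhood features

def gru_transition(x, h, W, U):
    """Hypothetical GRU-style recurrent transition applied after attention."""
    z = sigmoid(x @ W["z"] + h @ U["z"])               # update gate
    r = sigmoid(x @ W["r"] + h @ U["r"])               # reset gate
    h_tilde = np.tanh(x @ W["h"] + (r * h) @ U["h"])   # candidate state
    return (1 - z) * h + z * h_tilde

# Random toy inputs and weights.
H = rng.standard_normal((n, d))
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))
W = {k: rng.standard_normal((d, d)) for k in "zrh"}
U = {k: rng.standard_normal((d, d)) for k in "zrh"}

# One aggregation step: attend over the neighbourhood, then update
# the target node's representation (row 0) with the recurrent transition.
attended = self_attention(H, Wq, Wk, Wv)
h_new = gru_transition(attended[0], H[0], W, U)
print(h_new.shape)  # (8,)
```

In the full model this step would be stacked over several layers and timesteps, and the resulting node vectors pooled into a graph-level representation for classification.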

Benchmark Results

| Dataset  | Model                | Metric   | Claimed | Verified | Status     |
|----------|----------------------|----------|---------|----------|------------|
| COLLAB   | U2GNN                | Accuracy | 77.84   | —        | Unverified |
| COLLAB   | U2GNN (Unsupervised) | Accuracy | 95.62   | —        | Unverified |
| D&D      | U2GNN                | Accuracy | 80.23   | —        | Unverified |
| D&D      | U2GNN (Unsupervised) | Accuracy | 95.67   | —        | Unverified |
| IMDb-B   | U2GNN (Unsupervised) | Accuracy | 96.41   | —        | Unverified |
| IMDb-B   | U2GNN                | Accuracy | 77.04   | —        | Unverified |
| IMDb-M   | U2GNN (Unsupervised) | Accuracy | 89.2    | —        | Unverified |
| IMDb-M   | U2GNN                | Accuracy | 53.6    | —        | Unverified |
| MUTAG    | U2GNN                | Accuracy | 89.97   | —        | Unverified |
| MUTAG    | U2GNN (Unsupervised) | Accuracy | 88.47   | —        | Unverified |
| PROTEINS | U2GNN                | Accuracy | 78.53   | —        | Unverified |
| PROTEINS | U2GNN (Unsupervised) | Accuracy | 80.01   | —        | Unverified |
| PTC      | U2GNN                | Accuracy | 69.63   | —        | Unverified |
| PTC      | U2GNN (Unsupervised) | Accuracy | 91.81   | —        | Unverified |