SOTAVerified

Segmented Graph-Bert for Graph Instance Modeling

2020-02-09 · Code Available

Jiawei Zhang

Abstract

In graph instance representation learning, the diverse sizes of graph instances and the orderless property of graph nodes have been major obstacles that prevent existing representation learning models from working. In this paper, we examine the effectiveness of GRAPH-BERT, which was originally designed for node representation learning tasks, on graph instance representation learning. To adapt GRAPH-BERT to the new problem setting, we re-design it with a segmented architecture, which we name SEG-BERT (Segmented GRAPH-BERT) for simplicity of reference. SEG-BERT involves no node-order-variant inputs or functional components, so it handles the orderless property of graph nodes naturally. Moreover, SEG-BERT's segmented architecture introduces three strategies to unify graph instance sizes: full-input, padding/pruning, and segment shifting. SEG-BERT is pre-trainable in an unsupervised manner and can be transferred to new tasks directly or with fine-tuning as needed. We have tested the effectiveness of SEG-BERT with experiments on seven graph instance benchmark datasets; SEG-BERT outperforms the comparison methods on six of them with significant performance advantages.
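The size-unification strategies named in the abstract can be illustrated with a short sketch. The function names and the zero-padding choice below are our own assumptions for illustration; the paper only names the strategies (full-input, padding/pruning, and segment shifting), not this implementation.

```python
import numpy as np

def pad_or_prune(x, k):
    # Padding/pruning strategy (sketch): force a graph's node feature
    # matrix x (n nodes x d features) to exactly k nodes.
    n, d = x.shape
    if n >= k:
        return x[:k]               # prune surplus nodes
    pad = np.zeros((k - n, d))     # assumed: zero-vector padding nodes
    return np.vstack([x, pad])

def segment_shift(x, k):
    # Segment-shifting strategy (sketch): split an oversized graph into
    # consecutive k-node segments, padding the final partial segment,
    # so each segment fits a fixed-size model input.
    return [pad_or_prune(x[i:i + k], k) for i in range(0, len(x), k)]
```

Under these assumptions, a 7-node graph with segment size 3 yields three fixed-size segments, the last one padded.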

Benchmark Results

| Dataset  | Model    | Metric   | Claimed | Verified | Status     |
|----------|----------|----------|---------|----------|------------|
| COLLAB   | SEG-BERT | Accuracy | 78.42   |          | Unverified |
| IMDb-B   | SEG-BERT | Accuracy | 77.2    |          | Unverified |
| IMDb-M   | SEG-BERT | Accuracy | 53.4    |          | Unverified |
| MUTAG    | SEG-BERT | Accuracy | 90.85   |          | Unverified |
| PROTEINS | SEG-BERT | Accuracy | 77.09   |          | Unverified |
| PTC      | SEG-BERT | Accuracy | 68.86   |          | Unverified |
