
Mutual Information Maximization in Graph Neural Networks

2019-05-21 · Code Available

Xinhan Di, Pengqian Yu, Rui Bu, Mingchao Sun


Abstract

A variety of graph neural network (GNN) frameworks for representation learning on graphs have recently been developed. These frameworks rely on an aggregation-and-iteration scheme to learn node representations; however, information between nodes is inevitably lost during this process. To reduce this loss, we extend GNN frameworks by examining the aggregation-and-iteration scheme through the lens of mutual information. We propose a new approach that enlarges the normal neighborhood used in GNN aggregation, with the aim of maximizing mutual information. Through a series of experiments on several benchmark datasets, we show that the proposed approach improves on state-of-the-art performance for four types of graph tasks: supervised and semi-supervised graph classification, graph link prediction, and graph edge generation and classification.
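The abstract's central idea is aggregating over an enlarged neighborhood rather than only immediate neighbors. The sketch below is a minimal illustration of that idea, not the paper's actual implementation: the function names, the k-hop enlargement rule, and mean aggregation are all assumptions chosen for clarity.

```python
# Illustrative sketch (not the paper's code): each node aggregates features
# over an enlarged k-hop neighborhood instead of only its 1-hop neighbors,
# the intuition being that a larger neighborhood preserves more information
# between nodes during message passing.

def k_hop_neighborhood(adj, node, k):
    """Return the set of nodes within k hops of `node`, excluding itself."""
    frontier = {node}
    seen = {node}
    for _ in range(k):
        frontier = {v for u in frontier for v in adj.get(u, [])} - seen
        seen |= frontier
    return seen - {node}

def aggregate(adj, features, k=2):
    """Mean-aggregate feature vectors over each node's k-hop neighborhood."""
    out = {}
    for node, feat in features.items():
        hood = k_hop_neighborhood(adj, node, k)
        if not hood:
            out[node] = list(feat)  # isolated node keeps its own features
            continue
        dim = len(feat)
        acc = [0.0] * dim
        for nbr in hood:
            for i in range(dim):
                acc[i] += features[nbr][i]
        out[node] = [a / len(hood) for a in acc]
    return out

# Toy path graph 0 - 1 - 2 - 3
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
features = {0: [1.0], 1: [2.0], 2: [3.0], 3: [4.0]}
print(aggregate(adj, features, k=1))  # standard 1-hop aggregation
print(aggregate(adj, features, k=2))  # enlarged 2-hop aggregation
```

With `k=1` this reduces to the ordinary neighborhood mean; raising `k` widens the receptive field of a single aggregation step, which is one simple way to realize the "enlarged neighborhood" described above.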

Benchmark Results

| Dataset  | Model    | Metric   | Claimed | Verified | Status     |
|----------|----------|----------|---------|----------|------------|
| 20NEWS   | sKNN-LDS | Accuracy | 47.9    | —        | Unverified |
| Cancer   | sKNN-LDS | Accuracy | 95.7    | —        | Unverified |
| Citeseer | sKNN-LDS | Accuracy | 73.7    | —        | Unverified |
| COLLAB   | sGIN     | Accuracy | 80.71   | —        | Unverified |
| Cora     | sKNN-LDS | Accuracy | 72.3    | —        | Unverified |
| Digits   | sKNN-LDS | Accuracy | 92.5    | —        | Unverified |
| IMDb-B   | sGIN     | Accuracy | 77.94   | —        | Unverified |
| IMDb-M   | sGIN     | Accuracy | 54.52   | —        | Unverified |
| MUTAG    | sGIN     | Accuracy | 94.14   | —        | Unverified |
| NCI1     | sGIN     | Accuracy | 83.85   | —        | Unverified |
| PROTEINS | sGIN     | Accuracy | 78.97   | —        | Unverified |
| PTC      | sGIN     | Accuracy | 73.56   | —        | Unverified |
| Wine     | sKNN-LDS | Accuracy | 98      | —        | Unverified |
