
Understanding Attention and Generalization in Graph Neural Networks

2019-05-08 · NeurIPS 2019 · Code Available

Boris Knyazev, Graham W. Taylor, Mohamed R. Amer



Abstract

We aim to better understand attention over nodes in graph neural networks (GNNs) and identify factors influencing its effectiveness. We particularly focus on the ability of attention GNNs to generalize to larger, more complex or noisy graphs. Motivated by insights from the work on Graph Isomorphism Networks, we design simple graph reasoning tasks that allow us to study attention in a controlled environment. We find that under typical conditions the effect of attention is negligible or even harmful, but under certain conditions it provides an exceptional gain in performance of more than 60% in some of our classification tasks. Satisfying these conditions in practice is challenging and often requires optimal initialization or supervised training of attention. We propose an alternative recipe and train attention in a weakly-supervised fashion that approaches the performance of supervised models, and, compared to unsupervised models, improves results on several synthetic as well as real datasets. Source code and datasets are available at https://github.com/bknyaz/graph_attention_pool.
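The attention studied in the abstract assigns each node a scalar coefficient that weights its contribution to the graph-level representation (and, in the pooling variant, selects which nodes to keep). The following is a minimal, hypothetical NumPy sketch of that idea; the function name, shapes, and the single linear projection are illustrative assumptions, not the authors' implementation (see the linked repository for the real code).

```python
import numpy as np

def attention_pool(node_feats: np.ndarray, w: np.ndarray) -> np.ndarray:
    """Attention-weighted pooling over graph nodes (illustrative sketch).

    node_feats: (N, F) array of node features.
    w: (F,) attention projection vector (an assumed parameterization).
    Returns an (F,) graph-level embedding.
    """
    scores = node_feats @ w                 # (N,) unnormalized attention scores
    alpha = np.exp(scores - scores.max())   # numerically stable softmax
    alpha = alpha / alpha.sum()             # attention coefficients sum to 1
    return alpha @ node_feats               # weighted sum of node features

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))                 # a toy graph: 5 nodes, 8 features
w = rng.normal(size=8)
z = attention_pool(X, w)
print(z.shape)  # (8,)
```

In this framing, the paper's "weakly-supervised" recipe corresponds to training the parameters producing the scores with an auxiliary signal about which nodes matter, rather than learning them end-to-end from the classification loss alone.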


Benchmark Results

Dataset     Model                        Metric    Claimed  Verified  Status
COLLAB      Weakly-supervised ChebyNet   Accuracy  66.97    -         Unverified
D&D         Weakly-supervised ChebyNet   Accuracy  78.36    -         Unverified
PROTEINS    Weakly-supervised ChebyNet   Accuracy  77.09    -         Unverified

Reproductions