Bag of Tricks for Node Classification with Graph Neural Networks

2021-03-24

Yangkun Wang, Jiarui Jin, Weinan Zhang, Yong Yu, Zheng Zhang, David Wipf


Abstract

Over the past few years, graph neural networks (GNNs) and label propagation-based methods have made significant progress in addressing node classification tasks on graphs. However, in addition to their reliance on elaborate architectures and algorithms, there are several key technical details that are frequently overlooked, yet nonetheless can play a vital role in achieving satisfactory performance. In this paper, we first summarize a series of existing tricks of the trade, and then propose several new ones related to label usage, loss function formulation, and model design that can significantly improve various GNN architectures. We empirically evaluate their impact on final node classification accuracy by conducting ablation studies, and demonstrate consistently improved performance, often to an extent that outweighs the gains from more dramatic changes in the underlying GNN architecture. Notably, many of the top-ranked models on the Open Graph Benchmark (OGB) leaderboard and the KDD Cup 2021 Large-Scale Challenge MAG240M-LSC benefit from the techniques we introduce.
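One of the label-usage tricks referenced in the abstract (and in the "labels"/"label reuse" model variants below) is to feed training labels back in as input features, randomly hiding a fraction of them during training so the model cannot simply copy a node's own label. The sketch below illustrates this idea in plain NumPy; the function name, arguments, and masking scheme are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

def augment_with_labels(features, labels, train_mask, mask_rate=0.5, seed=None):
    """Label-usage trick (sketch): append one-hot labels to node features.

    Only training-set labels are made visible, and a random fraction
    (mask_rate) of those is hidden as well, so the model must learn to
    propagate label information rather than memorize it. Hidden and
    non-training nodes receive an all-zero label vector.
    """
    rng = np.random.default_rng(seed)
    n = len(labels)
    num_classes = int(labels.max()) + 1
    onehot = np.zeros((n, num_classes))
    # A label is visible iff the node is in the training set and survives masking.
    visible = train_mask & (rng.random(n) >= mask_rate)
    onehot[visible, labels[visible]] = 1.0
    # Concatenate label channels onto the original feature matrix.
    return np.concatenate([features, onehot], axis=1)
```

At inference time one would pass `mask_rate=0.0` so all training labels are visible; "label reuse" additionally feeds the model's own predictions back in as soft labels for further passes.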

Tasks

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| ogbn-arxiv | GAT+norm. adj.+label reuse | Number of params | 1,441,580 | | Unverified |
| ogbn-arxiv | GAT+norm. adj.+labels | Number of params | 1,441,580 | | Unverified |
| ogbn-arxiv | GAT+norm.adj.+labels | Number of params | 1,628,440 | | Unverified |
| ogbn-arxiv | GCN+linear+labels | Number of params | 238,632 | | Unverified |
| ogbn-proteins | GAT+BoT | Number of params | 2,484,192 | | Unverified |
| ogbn-proteins | GAT+EdgeFeatureAtt | Number of params | 2,475,232 | | Unverified |

Reproductions