
RGI : Regularized Graph Infomax for self-supervised learning on graphs

2023-03-15

Oscar Pina, Verónica Vilaplana


Abstract

Self-supervised learning is gaining considerable attention as a way to avoid the need for extensive annotations in representation learning on graphs. We introduce Regularized Graph Infomax (RGI), a simple yet effective framework for node-level self-supervised learning on graphs that trains a graph neural network encoder by maximizing the mutual information between node-level local and global views, in contrast to previous works that employ graph-level global views. The method promotes predictability between the views while regularizing the covariance matrices of the representations. RGI is therefore non-contrastive, does not depend on complex asymmetric architectures or training tricks, is augmentation-free, and does not rely on a two-branch architecture. We evaluate RGI in both transductive and inductive settings on popular graph benchmarks and show that it achieves state-of-the-art performance despite its simplicity.
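The abstract describes two ingredients: an invariance term that makes the node-level local and global views predictable from one another, and a regularizer on the covariance matrices of the representations that prevents collapse. The sketch below illustrates one plausible form of such an objective with NumPy; the function name, the use of a mean-squared-error invariance term, and the off-diagonal covariance penalty are all assumptions for illustration, not the paper's exact loss.

```python
import numpy as np

def rgi_style_loss(z_local, z_global, lam=1.0):
    """Hypothetical sketch of an RGI-style objective (not the paper's exact loss).

    z_local, z_global: (N, D) arrays of node-level local and global views.
    Combines an invariance term (predictability between views) with a
    covariance regularizer on each view's representations.
    """
    # Invariance: penalize disagreement between the two node-level views.
    invariance = np.mean((z_local - z_global) ** 2)

    def cov_penalty(z):
        # Penalize off-diagonal entries of the feature covariance matrix,
        # discouraging redundant (correlated) feature dimensions.
        zc = z - z.mean(axis=0)
        cov = (zc.T @ zc) / (len(z) - 1)
        off_diag = cov - np.diag(np.diag(cov))
        return np.sum(off_diag ** 2) / z.shape[1]

    return invariance + lam * (cov_penalty(z_local) + cov_penalty(z_global))
```

With identical views the invariance term vanishes and only the covariance penalties remain, so the loss decreases as the views agree and as their feature dimensions decorrelate; in practice the views would come from a GNN encoder and the loss would be minimized by gradient descent.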
