Adversarial network embedding with bootstrapped representations for sparse networks
Zelong Wu, Yidan Wang, Guoliang Lin, Junlong Liu
Abstract
The inherent sparsity of real-world networks makes it challenging to learn rich embeddings and to reconstruct networks accurately. To address these challenges, a novel method termed Adversarial Network Embedding with Bootstrapped Representations (ANEBR) is proposed. Firstly, a novel network augmentation method is employed for positive sampling: ANEBR uses the Katz index to extract higher-order latent information and refines it with α-entmax, retaining the crucial information while minimizing noise. Secondly, ANEBR circumvents negative sampling by learning bootstrapped representations. Building on the bootstrapped representations of the BYOL algorithm, ANEBR incorporates GAN techniques to align the learned embeddings nonlinearly. Finally, ANEBR attains accurate network reconstruction by imposing a low-rank constraint on the reconstruction error through the nuclear norm. Extensive experiments with statistical and sensitivity analyses demonstrate that ANEBR outperforms state-of-the-art methods on various tasks. In particular, ANEBR reconstructs the PPI network with a precision of 0.9992, a relative improvement of 6.65%. Code is available at https://github.com/wuzelong/ANEBR.
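The augmentation step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the closed-form Katz index, the attenuation factor `beta`, the toy graph, and the use of sparsemax (the α = 2 special case of α-entmax) are all assumptions made here for concreteness.

```python
import numpy as np

def katz_index(A, beta=0.2):
    """Katz index: sum_{l>=1} beta^l A^l = (I - beta*A)^{-1} - I.
    Requires beta < 1 / lambda_max(A) for the series to converge."""
    n = A.shape[0]
    return np.linalg.inv(np.eye(n) - beta * A) - np.eye(n)

def sparsemax(z):
    """Sparse projection onto the simplex (Martins & Astudillo, 2016),
    i.e. alpha-entmax with alpha = 2; used here as a stand-in for the
    paper's alpha-entmax refinement."""
    z_sorted = np.sort(z)[::-1]
    cssv = np.cumsum(z_sorted)
    k = np.arange(1, len(z) + 1)
    support = 1 + k * z_sorted > cssv       # indices kept in the support
    k_max = k[support][-1]
    tau = (cssv[k_max - 1] - 1) / k_max     # threshold shifting z
    return np.maximum(z - tau, 0.0)

# Toy 4-node cycle graph (hypothetical example, not a dataset from the paper)
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)

K = katz_index(A, beta=0.2)                  # dense higher-order proximities
P = np.apply_along_axis(sparsemax, 1, K)     # row-wise sparse distributions
```

Each row of `P` is a sparse probability distribution over candidate positive samples: the Katz index densifies the sparse adjacency with higher-order paths, and the entmax-style projection prunes the weak, noisy entries back out.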