Node Feature Extraction by Self-Supervised Multi-scale Neighborhood Prediction
Eli Chien, Wei-Cheng Chang, Cho-Jui Hsieh, Hsiang-Fu Yu, Jiong Zhang, Olgica Milenkovic, Inderjit S Dhillon
Code
- github.com/amzn/pecos (official, in paper; PyTorch, ★ 540)
- github.com/elichienxD/deep_gcns_torch (PyTorch, ★ 9)
- github.com/elichienxD/SAGN_with_SLE (PyTorch, ★ 6)
- github.com/OctoberChang/GAMLP (PyTorch, ★ 2)
Abstract
Learning on graphs has attracted significant attention in the learning community due to numerous real-world applications. In particular, graph neural networks (GNNs), which take numerical node features and graph structure as inputs, have been shown to achieve state-of-the-art performance on various graph-related learning tasks. Recent works exploring the correlation between numerical node features and graph structure via self-supervised learning have paved the way for further performance improvements of GNNs. However, methods used for extracting numerical node features from raw data are still graph-agnostic within standard GNN pipelines. This practice is sub-optimal as it prevents one from fully utilizing potential correlations between graph topology and node attributes. To mitigate this issue, we propose a new self-supervised learning framework, Graph Information Aided Node feature exTraction (GIANT). GIANT makes use of the eXtreme Multi-label Classification (XMC) formalism, which is crucial for fine-tuning the language model based on graph information, and scales to large datasets. We also provide a theoretical analysis that justifies the use of XMC over link prediction and motivates integrating XR-Transformers, a powerful method for solving XMC problems, into the GIANT framework. We demonstrate the superior performance of GIANT over the standard GNN pipeline on Open Graph Benchmark datasets: For example, we improve the accuracy of the top-ranked method GAMLP from 68.25% to 69.67%, SGC from 63.29% to 66.10% and MLP from 47.24% to 61.10% on the ogbn-papers100M dataset by leveraging GIANT.
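The core idea of the abstract, predicting a node's neighborhood from its raw features as a multi-label (XMC-style) classification task and then reusing the learned encoder as a graph-aware feature extractor, can be sketched in a few lines. The toy graph, bag-of-words features, and two-layer encoder below are illustrative assumptions, not the paper's XR-Transformer implementation:

```python
import numpy as np

# Toy sketch of neighborhood prediction as multi-label classification.
# Each node has raw features (standing in for its text); the targets
# are the indices of its neighbors in the adjacency matrix.
rng = np.random.default_rng(0)

n_nodes, vocab, hidden = 6, 10, 4
X = rng.random((n_nodes, vocab))            # raw node features (e.g. bag-of-words)
A = np.zeros((n_nodes, n_nodes))
for i, j in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)]:
    A[i, j] = A[j, i] = 1.0                 # symmetric adjacency = multi-label targets

W1 = rng.normal(0, 0.1, (vocab, hidden))    # encoder (stand-in for a fine-tuned LM)
W2 = rng.normal(0, 0.1, (hidden, n_nodes))  # one output per candidate neighbor

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(500):                        # gradient descent on multi-label BCE
    H = np.tanh(X @ W1)                     # node embeddings
    P = sigmoid(H @ W2)                     # predicted neighbor probabilities
    G = (P - A) / n_nodes                   # BCE gradient w.r.t. the logits
    W2 -= lr * (H.T @ G)
    W1 -= lr * (X.T @ ((G @ W2.T) * (1 - H ** 2)))

H = np.tanh(X @ W1)                         # graph-aware features for a downstream GNN
print(H.shape)                              # (6, 4)
```

The real framework replaces the linear encoder with a Transformer language model and uses hierarchical label clustering (XR-Transformer) so that the neighbor-prediction output layer scales to graphs with millions of nodes.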
Benchmark Results
| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| ogbn-arxiv | GIANT-XRT+RevGAT+KD (use raw text) | Number of params | 1,304,912 | — | Unverified |
| ogbn-arxiv | GIANT-XRT+GraphSAGE (use raw text) | Number of params | 546,344 | — | Unverified |
| ogbn-arxiv | GIANT-XRT+MLP (use raw text) | Number of params | 273,960 | — | Unverified |
| ogbn-papers100M | GIANT-XRT+GAMLP+RLU (use raw text) | Number of params | 21,551,631 | — | Unverified |
| ogbn-products | GIANT-XRT+SAGN+SLE+C&S (use raw text) | Number of params | 1,548,382 | — | Unverified |
| ogbn-products | GIANT-XRT+SAGN+SLE (use raw text) | Number of params | 1,548,382 | — | Unverified |
| ogbn-products | GIANT-XRT+GraphSAINT (use raw text) | Number of params | 417,583 | — | Unverified |
| ogbn-products | GIANT-XRT+MLP (use raw text) | Number of params | 275,759 | — | Unverified |