
Graph Neural Network Acceleration via Matrix Dimension Reduction

2021-01-01

Shunhua Jiang, Yunze Man, Zhao Song, Danyang Zhuo


Abstract

Graph Neural Networks (GNNs) have become the de facto method for machine learning on graph data (e.g., social networks, protein structures, code ASTs), but they require significant time and resources to train. One alternative is the Graph Neural Tangent Kernel (GNTK), a kernel method that corresponds to infinitely wide multi-layer GNNs. GNTK's parameters can be solved directly in a single step, avoiding time-consuming gradient descent. Today, GNTK is the state-of-the-art method for achieving high training speed without compromising accuracy. Unfortunately, solving for the kernel and searching for parameters can still take hours to days on real-world graphs. The current computation of GNTK has running time O(N^4), where N is the number of nodes in the graph. This prevents GNTK from scaling to datasets that contain large graphs. Theoretically, we present two techniques to speed up GNTK training while preserving the generalization error: (1) We use a novel matrix decoupling method to reduce matrix dimensions during the kernel solving. This allows us to reduce the dominant computational bottleneck term from O(N^4) to O(N^3). (2) We apply sketching to further reduce the bottleneck term to o(N^2.373), where 2.373 is the exponent of current fast matrix multiplication. Experimentally, we demonstrate that our approaches speed up kernel learning by up to 19x on real-world benchmark datasets.
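To make the two ideas in the abstract concrete, the following is a minimal NumPy sketch, not the paper's actual algorithm: it illustrates (a) approximating a Gram matrix with a random Gaussian sketch so the expensive product is computed in a lower dimension, and (b) solving kernel ridge regression in one closed-form step rather than by gradient descent. The feature matrix `X`, the dimensions, and the regularizer `lam` are all illustrative assumptions; in GNTK the Gram matrix comes from the graph kernel itself, not from explicit features.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 100, 4000, 2000  # samples, feature dim, sketch dim (illustrative)

# Stand-in feature matrix with roughly unit-norm rows.
X = rng.standard_normal((n, d)) / np.sqrt(d)
y = rng.standard_normal(n)

# Exact Gram matrix: O(n^2 d) to form.
K = X @ X.T

# Gaussian sketch S (m x d) with E[S^T S] = I, so (X S^T)(X S^T)^T
# approximates K at cost O(n d m + n^2 m) instead of O(n^2 d).
S = rng.standard_normal((m, d)) / np.sqrt(m)
XS = X @ S.T
K_sketch = XS @ XS.T
err = np.linalg.norm(K_sketch - K) / np.linalg.norm(K)

# Kernel ridge regression is solved in a single closed-form step,
# with no gradient descent over network weights.
lam = 1e-1
alpha = np.linalg.solve(K + lam * np.eye(n), y)
residual = np.linalg.norm((K + lam * np.eye(n)) @ alpha - y)

print(f"sketch error: {err:.3f}, solve residual: {residual:.2e}")
```

The sketch error shrinks as the sketch dimension `m` grows, trading accuracy for speed; the paper's contribution is doing this kind of dimension reduction inside the GNTK computation while provably preserving generalization error.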
