Improved Large-Scale Graph Learning through Ridge Spectral Sparsification

2018-07-01 · ICML 2018

Daniele Calandriello, Alessandro Lazaric, Ioannis Koutis, Michal Valko


Abstract

The representation and learning benefits of methods based on graph Laplacians, such as Laplacian smoothing or the harmonic function solution for semi-supervised learning (SSL), are empirically and theoretically well supported. Nonetheless, the exact versions of these methods scale poorly with the number of nodes n of the graph. In this paper, we combine a spectral sparsification routine with Laplacian learning. Given a graph G as input, our algorithm computes a sparsifier in a distributed way in O(n log^3(n)) time, O(m log^3(n)) work and O(n log(n)) memory, using only log(n) rounds of communication. Furthermore, motivated by the regularization often employed in learning algorithms, we show that constructing sparsifiers that preserve the spectrum of the Laplacian only up to the regularization level may drastically reduce the size of the final graph. By constructing a spectrally-similar graph, we are able to bound the error induced by the sparsification for a variety of downstream tasks (e.g., SSL). We empirically validate the theoretical guarantees on the Amazon co-purchase graph and compare to state-of-the-art heuristics.
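To make the sparsification idea concrete, here is a minimal numpy sketch of effective-resistance sampling in the Spielman–Srivastava style, which underlies spectral sparsifiers like the one in the abstract. This is not the paper's algorithm: it computes exact effective resistances through the Laplacian pseudo-inverse, an O(n^3) step that the paper's distributed routine is precisely designed to avoid, and the graph, oversampling factor, and seed are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Small random graph: n nodes, unit-weight edge list (a toy stand-in for the
# large graphs targeted by the paper).
n = 30
edges = [(i, j) for i in range(n) for j in range(i + 1, n) if rng.random() < 0.3]

def laplacian(n, edges, weights):
    """Dense combinatorial Laplacian L = D - W from a weighted edge list."""
    L = np.zeros((n, n))
    for (i, j), w in zip(edges, weights):
        L[i, i] += w
        L[j, j] += w
        L[i, j] -= w
        L[j, i] -= w
    return L

L = laplacian(n, edges, np.ones(len(edges)))

# Effective resistance of each edge via the pseudo-inverse. Exact but cubic;
# the paper's contribution is approximating these quantities cheaply and in
# a distributed fashion.
Lpinv = np.linalg.pinv(L)
r = np.array([Lpinv[i, i] + Lpinv[j, j] - 2 * Lpinv[i, j] for i, j in edges])

# Sample q edges with probability proportional to effective resistance and
# reweight each sample by 1/(q * p_e), so the sparsifier is unbiased:
# E[L_sparse] = L.
p = r / r.sum()
q = 4 * n  # illustrative oversampling; theory asks for O(n log n / eps^2)
idx = rng.choice(len(edges), size=q, p=p)
w_sparse = np.zeros(len(edges))
for k in idx:
    w_sparse[k] += 1.0 / (q * p[k])

L_sparse = laplacian(n, edges, w_sparse)

# Spectral similarity means x^T L x ~ x^T L_sparse x for all x; spot-check
# the two quadratic forms on a random test vector.
x = rng.standard_normal(n)
print(x @ L @ x, x @ L_sparse @ x)
```

The sparsifier keeps at most q of the original edges yet approximately preserves every Laplacian quadratic form, which is exactly the property the abstract exploits to bound the error of downstream Laplacian-based learners such as the harmonic SSL solution.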
