
Sparse Networks from Scratch: Faster Training without Losing Performance

2019-07-10 · ICLR 2020 · Code Available

Tim Dettmers, Luke Zettlemoyer

Abstract

We demonstrate the possibility of what we call sparse learning: accelerated training of deep neural networks that maintain sparse weights throughout training while achieving dense performance levels. We accomplish this by developing sparse momentum, an algorithm that uses exponentially smoothed gradients (momentum) to identify layers and weights that reduce the error efficiently. Sparse momentum redistributes pruned weights across layers according to the mean momentum magnitude of each layer. Within a layer, sparse momentum grows weights according to the momentum magnitude of zero-valued weights. We demonstrate state-of-the-art sparse performance on MNIST, CIFAR-10, and ImageNet, decreasing the mean error by a relative 8%, 15%, and 6% compared to other sparse algorithms. Furthermore, we show that sparse momentum reliably reproduces dense performance levels while providing up to 5.61x faster training. In our analysis, ablations show that the benefits of momentum redistribution and growth increase with the depth and size of the network. Additionally, we find that sparse momentum is insensitive to the choice of its hyperparameters, suggesting that sparse momentum is robust and easy to use.
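The abstract describes three phases of a sparse momentum update: prune the smallest-magnitude active weights in each layer, redistribute the freed parameter budget across layers in proportion to each layer's mean momentum magnitude, and regrow zero-valued weights within each layer where the momentum magnitude is largest. The sketch below illustrates one such step in NumPy. It is a simplified reading of the abstract, not the authors' reference implementation; the layer dictionaries, the prune_rate value, and the zero-initialization of regrown weights are illustrative assumptions.

```python
import numpy as np

def sparse_momentum_step(weights, momentum, masks, prune_rate=0.2):
    """One prune / redistribute / regrow step, sketched from the abstract.

    weights, momentum, masks: dicts mapping layer name -> ndarrays of equal shape.
    masks hold 1.0 for active weights and 0.0 for pruned (zero-valued) weights.
    prune_rate is an illustrative assumption, not a value from the paper.
    """
    # 1. Prune: zero out the prune_rate fraction of smallest-magnitude active weights per layer.
    total_pruned = 0
    for name, w in weights.items():
        active = np.flatnonzero(masks[name])
        n_prune = int(prune_rate * active.size)
        if n_prune == 0:
            continue
        smallest = active[np.argsort(np.abs(w.flat[active]))[:n_prune]]
        masks[name].flat[smallest] = 0.0
        w.flat[smallest] = 0.0
        total_pruned += n_prune

    # 2. Redistribute: split the freed budget across layers in proportion to
    #    each layer's mean momentum magnitude over its remaining active weights.
    contrib = {
        name: float(np.abs(momentum[name][masks[name] > 0]).mean())
        if (masks[name] > 0).any() else 0.0
        for name in weights
    }
    total_contrib = sum(contrib.values()) or 1.0
    growth = {name: int(round(total_pruned * c / total_contrib)) for name, c in contrib.items()}

    # 3. Grow: within each layer, re-enable the zero-valued weights whose
    #    momentum magnitude is largest; regrown weights start at zero here.
    for name, w in weights.items():
        zeros = np.flatnonzero(masks[name] == 0)
        n_grow = min(growth[name], zeros.size)
        if n_grow == 0:
            continue
        largest = zeros[np.argsort(-np.abs(momentum[name].flat[zeros]))[:n_grow]]
        masks[name].flat[largest] = 1.0
        w.flat[largest] = 0.0  # simplification: regrown weights initialized to zero

    return weights, momentum, masks


# Toy usage: two layers at ~5% density (shapes loosely modeled on LeNet 300-100).
rng = np.random.default_rng(0)
shapes = {"fc1": (784, 300), "fc2": (300, 100)}
masks = {n: (rng.random(s) < 0.05).astype(float) for n, s in shapes.items()}
weights = {n: rng.normal(size=s) * masks[n] for n, s in shapes.items()}
momentum = {n: rng.normal(size=s) for n, s in shapes.items()}
weights, momentum, masks = sparse_momentum_step(weights, momentum, masks)
```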

Tasks

Benchmark Results

Dataset | Model | Metric | Claimed | Verified | Status
CIFAR-10 | WRN-22-8 (Sparse Momentum) | Percentage correct | 95.04 | | Unverified
MNIST | LeNet 300-100 (Sparse Momentum) | Percentage error | 1.26 | | Unverified

Reproductions