
Dynamic Sparse Training: Find Efficient Sparse Network From Scratch With Trainable Masked Layers

2020-05-14 · ICLR 2020 · Code Available

Junjie Liu, Zhe Xu, Runbin Shi, Ray C. C. Cheung, Hayden K. -H. So


Abstract

We present a novel network pruning algorithm called Dynamic Sparse Training that jointly finds the optimal network parameters and the sparse network structure in a unified optimization process with trainable pruning thresholds. These thresholds are adjusted dynamically via backpropagation, at a fine-grained, layer-wise level. We demonstrate that Dynamic Sparse Training can easily train very sparse neural network models with little performance loss using the same number of training epochs as dense models, and that it achieves state-of-the-art performance compared with other sparse training algorithms on various network architectures. Additionally, we report several surprising observations that provide strong evidence for the effectiveness and efficiency of our algorithm. These observations reveal underlying problems of traditional three-stage pruning algorithms and suggest how our algorithm can guide the design of more compact network architectures.
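
As a concrete illustration of the idea in the abstract, below is a minimal PyTorch sketch of a layer whose weights are pruned by a trainable, per-layer threshold learned jointly with the weights through backpropagation. The specific mask function, its surrogate gradient, the regularization hint, and all names (`BinaryStep`, `MaskedLinear`) are illustrative assumptions for this sketch; the paper defines its own step-function gradient approximation and training details, which may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class BinaryStep(torch.autograd.Function):
    """Hard 0/1 step for masking, with a surrogate gradient so that both
    weights and thresholds stay trainable. The pass-through-near-zero
    surrogate used here is an assumption, not the paper's exact choice."""

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return (x > 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        # Let gradients flow only near the threshold boundary.
        return grad_output * (x.abs() <= 1).float()


class MaskedLinear(nn.Module):
    """Linear layer with a trainable pruning threshold per output neuron
    (hypothetical granularity); weights whose magnitude falls below the
    learned threshold are masked to zero in the forward pass."""

    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(out_features, in_features))
        self.bias = nn.Parameter(torch.zeros(out_features))
        nn.init.kaiming_uniform_(self.weight)
        # Trainable thresholds, updated by backprop like any other parameter.
        self.threshold = nn.Parameter(torch.zeros(out_features, 1))

    def forward(self, x):
        mask = BinaryStep.apply(self.weight.abs() - self.threshold)
        return F.linear(x, self.weight * mask, self.bias)

    def sparsity(self):
        mask = (self.weight.abs() > self.threshold).float()
        return 1.0 - mask.mean().item()


if __name__ == "__main__":
    layer = MaskedLinear(784, 256)
    x = torch.randn(32, 784)
    loss = layer(x).pow(2).mean()
    loss.backward()  # gradients reach the weights AND the thresholds
    print(f"sparsity: {layer.sparsity():.2%}")
```

With the thresholds initialized at zero, no weight is pruned at the start; in practice a regularization term that rewards larger thresholds would be added to the training loss so that sparsity emerges during optimization (the exact form of that term is not specified in the abstract and is left out of this sketch).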
