
Make ℓ1 Regularization Effective in Training Sparse CNN

2018-07-11

Juncai He, Xiaodong Jia, Jinchao Xu, Lian Zhang, Liang Zhao


Abstract

Compressed sensing using ℓ1 regularization is among the most powerful and popular sparsification techniques in many applications, but why has it not been used to obtain sparse deep learning models such as convolutional neural networks (CNNs)? This paper aims to answer this question and to show how to make it work. We first demonstrate that the commonly used stochastic gradient descent (SGD) training algorithm and its variants are not an appropriate match for ℓ1 regularization, and then replace them with a different training algorithm based on the regularized dual averaging (RDA) method. RDA was originally designed specifically for convex problems, but with new theoretical insights and algorithmic modifications (using proper initialization and adaptivity), we have made it an effective match for ℓ1 regularization, achieving state-of-the-art sparsity for CNNs compared to other weight-pruning methods without compromising accuracy (achieving 95% sparsity for ResNet18 on CIFAR-10, for example).
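The key property of RDA that the abstract alludes to is that its ℓ1-regularized update has a closed form: the next iterate is a soft-thresholding of the running average of all past gradients, so coordinates whose averaged gradient stays below the regularization strength become exactly zero. The following is a minimal Python sketch of that update on a toy quadratic problem; the function names and toy setup are illustrative, and the paper's CNN-specific modifications (initialization, adaptivity) are not reproduced here:

```python
import numpy as np

def rda_l1(grad_fn, dim, lam=0.1, gamma=5.0, steps=500, seed=0):
    """Sketch of regularized dual averaging (RDA) with an l1 term.

    Maintains the running average g_bar of all past (sub)gradients and sets
    the next iterate in closed form by soft-thresholding:
        w_{t+1, i} = -(sqrt(t)/gamma) * sign(g_bar_i) * max(|g_bar_i| - lam, 0)
    Coordinates whose averaged gradient magnitude stays below lam are set to
    exactly zero, which is the mechanism that produces sparse weights.
    """
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(dim)
    g_bar = np.zeros(dim)
    for t in range(1, steps + 1):
        g = grad_fn(w)
        g_bar += (g - g_bar) / t                       # running gradient average
        shrink = np.maximum(np.abs(g_bar) - lam, 0.0)  # soft-threshold
        w = -(np.sqrt(t) / gamma) * np.sign(g_bar) * shrink
    return w

# Toy least-squares problem whose minimizer has only 3 nonzero coordinates.
dim = 20
w_true = np.zeros(dim)
w_true[:3] = [2.0, -1.5, 1.0]
grad_fn = lambda w: w - w_true   # gradient of 0.5 * ||w - w_true||^2

w = rda_l1(grad_fn, dim)
print("nonzeros:", np.count_nonzero(w))  # most coordinates are exactly 0
```

Note the contrast with SGD: an SGD step with an ℓ1 subgradient rarely lands weights exactly on zero, whereas the RDA update above zeroes a coordinate whenever its averaged gradient falls inside the threshold, which is why the paper pairs ℓ1 regularization with RDA rather than SGD.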
