
Sparse matrix products for neural network compression

2021-01-01

Luc Giffon, Hachem Kadri, Stéphane Ayache, Ronan Sicre, Thierry Artières


Abstract

Over-parameterization of neural networks is a well-known issue that accompanies their strong performance. Among the many approaches proposed to tackle it, low-rank tensor decompositions have been widely investigated for compressing deep neural networks. Such techniques rely on a low-rank assumption on the layer weight tensors that does not always hold in practice. Following this observation, this paper studies sparsity-inducing techniques to build new sparse matrix-product layers for high-rate neural network compression. Specifically, we explore recent advances in sparse optimization to replace each layer's weight matrix, whether convolutional or fully connected, with a product of sparse matrices. Our experiments show that our approach provides a better compression-accuracy trade-off than the most popular low-rank-based compression techniques.
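To make the idea concrete, here is a minimal PyTorch sketch of a layer whose dense weight is replaced by a product of sparse factors. Everything in it is an illustrative assumption rather than the paper's method: the `SparseProductLinear` name, the choice of inner dimension, the fixed random supports, and the `density` parameter are all hypothetical, whereas the paper learns the factors and their supports via sparse optimization.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseProductLinear(nn.Module):
    """Linear layer storing K sparse factors whose product replaces the
    dense weight W, i.e. W ~ S_1 @ S_2 @ ... @ S_K.

    Illustrative sketch only: factor shapes and random fixed supports are
    assumptions, not the supports a sparse-optimization method would learn.
    """

    def __init__(self, in_features, out_features, num_factors=3, density=0.1):
        super().__init__()
        assert num_factors >= 2
        # One simple choice of inner dimension; other shapes are possible.
        dim = max(in_features, out_features)
        shapes = ([(out_features, dim)]
                  + [(dim, dim)] * (num_factors - 2)
                  + [(dim, in_features)])
        self.factors = nn.ParameterList(
            [nn.Parameter(torch.randn(r, c) / c ** 0.5) for r, c in shapes])
        # Fixed binary supports keeping a `density` fraction of entries.
        for k, (r, c) in enumerate(shapes):
            self.register_buffer(f"mask_{k}",
                                 (torch.rand(r, c) < density).float())
        self.bias = nn.Parameter(torch.zeros(out_features))

    def forward(self, x):
        # Rebuild W from the masked factors. Forming W densely keeps the
        # sketch short; applying the masked factors to x one at a time
        # would preserve the sparsity savings at inference.
        w = None
        for k, f in enumerate(self.factors):
            s = f * getattr(self, f"mask_{k}")
            w = s if w is None else w @ s
        return F.linear(x, w, self.bias)

layer = SparseProductLinear(512, 256, num_factors=3, density=0.05)
out = layer(torch.randn(8, 512))
nnz = sum(int(getattr(layer, f"mask_{k}").sum()) for k in range(3))
print(out.shape, f"nonzeros: {nnz} vs dense entries: {512 * 256}")
```

Under these toy settings the stored nonzeros are roughly a quarter of the dense weight's entries; the compression-accuracy trade-off the abstract describes comes from pushing such sparsity further while training the factors.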
