
Learning Fast Approximations of Sparse Nonlinear Regression

2020-10-26 · Code Available

Yuhai Song, Zhong Cao, Kailun Wu, Ziang Yan, ChangShui Zhang


Abstract

The idea of unfolding iterative algorithms as deep neural networks has been widely applied to sparse coding problems, providing both solid theoretical guarantees on convergence rates and superior empirical performance. However, for sparse nonlinear regression problems, this idea is rarely exploited due to the complexity introduced by the nonlinearity. In this work, we bridge the gap by introducing the Nonlinear Learned Iterative Shrinkage Thresholding Algorithm (NLISTA), which attains linear convergence under suitable conditions. Experiments on synthetic data corroborate our theoretical results and show that our method outperforms state-of-the-art methods.
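The abstract does not describe NLISTA itself, but the "unfolding" idea it builds on can be illustrated with the classic linear case: ISTA iterates a gradient step plus soft-thresholding, and LISTA unrolls a fixed number of those iterations into network layers whose weights and thresholds become learnable. The sketch below shows untrained layers initialized from ISTA; all names (`soft_threshold`, `unrolled_lista`, the layer count) are illustrative, not from the paper.

```python
import numpy as np

def soft_threshold(x, theta):
    # Elementwise shrinkage operator shared by ISTA and its unrolled variants.
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def ista(A, y, lam=0.1, n_iter=100):
    # Classic ISTA for linear sparse coding: min_x ||y - A x||^2 + lam ||x||_1.
    L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x + A.T @ (y - A @ x) / L, lam / L)
    return x

def unrolled_lista(A, y, lam=0.1, n_layers=16):
    # LISTA-style unrolling: each "layer" applies weights and a threshold.
    # Here they are merely initialized from ISTA; in LISTA they are trained.
    L = np.linalg.norm(A, 2) ** 2
    W_in = A.T / L                           # input weight (learnable in LISTA)
    W_rec = np.eye(A.shape[1]) - A.T @ A / L # recurrent weight (learnable)
    x = np.zeros(A.shape[1])
    for _ in range(n_layers):
        x = soft_threshold(W_rec @ x + W_in @ y, lam / L)
    return x

# Toy 3-sparse recovery problem with a random Gaussian dictionary.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50)) / np.sqrt(20)
x_true = np.zeros(50)
x_true[[3, 17, 41]] = [1.5, -2.0, 1.0]
y = A @ x_true
x_hat = ista(A, y, lam=0.01, n_iter=500)
```

The nonlinear regression setting the paper targets replaces the linear map `A x` with a nonlinear observation model, which is what makes the convergence analysis of the unrolled network substantially harder.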
