Alternating Iteratively Reweighted ℓ₁ and Subspace Newton Algorithms for Nonconvex Sparse Optimization
Hao Wang, Xiangyu Yang, Yichen Zhu
Code: github.com/yuqiawu/hpgsrn (official)
Abstract
This paper presents a novel hybrid algorithm for minimizing the sum of a continuously differentiable loss function and a nonsmooth, possibly nonconvex, sparse regularization function. The proposed method alternates between solving a reweighted ℓ₁-regularized subproblem and performing an inexact subspace Newton step. The reweighted ℓ₁ subproblem admits an efficient closed-form solution via the soft-thresholding operator, avoiding the computational overhead of general proximity operator calculations. As the algorithm approaches an optimal solution, it maintains a stable support set, ensuring that the nonzero components stay uniformly bounded away from zero. It then switches to a perturbed regularized Newton method, further accelerating convergence. We prove global convergence to a critical point and, under suitable conditions, establish local linear and quadratic convergence rates. Numerical experiments show that our algorithm outperforms existing methods in both efficiency and solution quality across various model prediction problems.
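The closed-form solution mentioned in the abstract can be illustrated concretely. Below is a minimal NumPy sketch, not the authors' implementation: the function names `soft_threshold` and `reweighted_l1_step`, the step size, and the proximal-gradient framing are illustrative assumptions. It shows how a weighted ℓ₁ subproblem is solved componentwise by soft-thresholding, with per-coordinate weights as in a reweighting scheme.

```python
import numpy as np

def soft_threshold(x, tau):
    # Closed-form minimizer of 0.5*(z - x)^2 + tau*|z| per component:
    # shrink x toward zero by tau, clipping at zero.
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def reweighted_l1_step(x, grad, weights, step=1.0):
    # One proximal-gradient step on a weighted-l1 subproblem (illustrative):
    # take a gradient step on the smooth loss, then apply componentwise
    # soft-thresholding with per-coordinate reweighting weights.
    return soft_threshold(x - step * grad, step * weights)

# Example: large entries survive shrinkage, small ones are set exactly to zero,
# which is how the iterates acquire a sparse, eventually stable support set.
x = np.array([3.0, -2.0, 0.5])
print(soft_threshold(x, 1.0))  # [ 2. -1.  0.]
```

Because each component is thresholded independently, the subproblem costs only O(n) per iteration, which is the efficiency advantage the abstract contrasts with general proximity operator evaluations.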