An inexact subsampled proximal Newton-type method for large-scale machine learning
Xuanqing Liu, Cho-Jui Hsieh, Jason D. Lee, Yuekai Sun
Abstract
We propose a fast proximal Newton-type algorithm for minimizing regularized finite sums that returns an $\epsilon$-suboptimal point in $\tilde{O}(d(n + \sqrt{\kappa d})\log(1/\epsilon))$ FLOPS, where $n$ is the number of samples, $d$ is the feature dimension, and $\kappa$ is the condition number. As long as $n > d$, the proposed method is more efficient than state-of-the-art accelerated stochastic first-order methods for non-smooth regularizers, which require $\tilde{O}(d(n + \sqrt{\kappa n})\log(1/\epsilon))$ FLOPS. The key idea is to form the subsampled Newton subproblem in a way that preserves the finite-sum structure of the objective, thereby allowing us to leverage recent developments in stochastic first-order methods to solve the subproblem. Experimental results verify that the proposed algorithm outperforms previous algorithms for $\ell_1$-regularized logistic regression on real datasets.
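To make the key idea concrete, below is a minimal Python sketch for $\ell_1$-regularized logistic regression, not the authors' code: the function name `subsampled_prox_newton`, the batch size, and all parameter values are illustrative assumptions, and the inner solver is plain proximal gradient on the subsampled quadratic model, a stand-in for the accelerated stochastic first-order solvers the paper actually leverages.

```python
import numpy as np


def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (elementwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)


def subsampled_prox_newton(X, y, lam, n_outer=20, n_inner=100,
                           batch=256, seed=0):
    """Sketch of an inexact subsampled proximal Newton method for
        min_w (1/n) * sum_i log(1 + exp(-y_i * x_i^T w)) + lam * ||w||_1.

    Each outer iteration builds the quadratic model
        g^T (z - w) + 0.5 (z - w)^T H_S (z - w) + lam * ||z||_1,
    where H_S averages per-sample Hessians over a subsample S, so the
    model keeps the finite-sum structure of the objective.  The model
    is then solved inexactly; here by proximal gradient for brevity.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_outer):
        m = X @ w
        # sigma_i = 1 / (1 + exp(y_i m_i)), written via tanh for stability
        sigma = 0.5 * (1.0 - np.tanh(0.5 * y * m))
        g = -(X.T @ (y * sigma)) / n          # full gradient of the smooth part
        S = rng.choice(n, size=min(batch, n), replace=False)
        Xs, ss = X[S], sigma[S]
        # Subsampled Hessian: average of sigma_i(1-sigma_i) x_i x_i^T over S,
        # plus a small damping term to keep the model well conditioned.
        H = (Xs.T * (ss * (1.0 - ss))) @ Xs / len(S) + 1e-6 * np.eye(d)
        step = 1.0 / np.linalg.eigvalsh(H)[-1]  # 1 / Lipschitz const. of model
        z = w.copy()
        for _ in range(n_inner):                # inexact inner solve
            z = soft_threshold(z - step * (g + H @ (z - w)), step * lam)
        w = z   # unit step for simplicity; proximal Newton methods
                # typically add a line search for global convergence
    return w


if __name__ == "__main__":
    # Toy usage on synthetic sparse-logistic data (illustrative only).
    rng = np.random.default_rng(1)
    X = rng.standard_normal((2000, 100))
    w_true = np.zeros(100)
    w_true[:5] = 1.0
    y = np.sign(X @ w_true + 0.1 * rng.standard_normal(2000))
    w_hat = subsampled_prox_newton(X, y, lam=0.01)
```

Note that because $H_S$ is an average over the subsample, the inner model is itself a finite sum, which is what lets stochastic first-order solvers run on it without ever materializing a $d \times d$ matrix; the explicit construction of `H` above is purely for brevity of the sketch.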