
Stochastic gradient-free descents

2019-12-31

Xiaopeng Luo, Xin Xu


Abstract

In this paper we propose stochastic gradient-free methods and accelerated methods with momentum for solving stochastic optimization problems. All these methods rely on stochastic directions rather than stochastic gradients. We analyze the convergence behavior of these methods under the mean-variance framework, and also provide a theoretical analysis of the inclusion of momentum in stochastic settings, which reveals that the momentum term we use adds a deviation of order O(1/k) but controls the variance at the order O(1/k) for the kth iteration. It is then shown that, when employing a decaying stepsize α_k = O(1/k), the stochastic gradient-free methods can still maintain the sublinear convergence rate O(1/k) and the accelerated methods with momentum can achieve a convergence rate O(1/k^2) in probability for strongly convex objectives with Lipschitz gradients; and all these methods converge to a solution with a zero expected gradient norm when the objective function is nonconvex, twice differentiable, and bounded below.
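
The abstract describes the general recipe: descend along stochastic directions rather than stochastic gradients, with a decaying stepsize α_k = O(1/k) and an optional momentum term. As a rough illustration only, and not the authors' exact algorithm, the sketch below pairs a random unit search direction with a finite-difference directional estimate and heavy-ball momentum; the function name `sgf_descent_momentum` and the parameters `c`, `beta`, and `mu` are our own illustrative choices.

```python
import numpy as np

def sgf_descent_momentum(f, x0, c=1.0, beta=0.9, mu=1e-4, iters=1000, seed=0):
    """Illustrative stochastic-direction descent with momentum.

    Hypothetical sketch (not the paper's exact method): at step k,
    draw a random unit direction u_k, form a directional estimate
    d_k = ((f(x + mu*u_k) - f(x)) / mu) * u_k without any gradient,
    and update with decaying stepsize alpha_k = c/k plus momentum.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    v = np.zeros_like(x)
    for k in range(1, iters + 1):
        u = rng.standard_normal(x.shape)
        u /= np.linalg.norm(u)                   # random unit direction
        d = (f(x + mu * u) - f(x)) / mu * u      # gradient-free directional estimate
        alpha = c / k                            # decaying stepsize alpha_k = O(1/k)
        v = beta * v - alpha * d                 # momentum accumulates past directions
        x = x + v
    return x

# Example: a strongly convex quadratic with Lipschitz gradient.
if __name__ == "__main__":
    f = lambda x: 0.5 * np.dot(x, x)
    x_star = sgf_descent_momentum(f, x0=np.ones(10), c=2.0, iters=5000)
    print(np.linalg.norm(x_star))  # should be close to 0
```

Setting `beta=0` reduces this to plain stochastic-direction descent; the `c/k` schedule mirrors the decaying stepsize analyzed in the paper.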
