Hybrid Stochastic Gradient Descent Algorithms for Stochastic Nonconvex Optimization

2019-05-15

Quoc Tran-Dinh, Nhan H. Pham, Dzung T. Phan, Lam M. Nguyen

Abstract

We introduce a hybrid stochastic estimator to design stochastic gradient algorithms for solving stochastic optimization problems. Such a hybrid estimator is a convex combination of two existing estimators, one biased and one unbiased, and enjoys a useful bound on its variance. We limit our consideration to a hybrid SARAH-SGD estimator for nonconvex expectation problems, although our idea extends to a broader class of estimators in both convex and nonconvex settings. We propose a new single-loop stochastic gradient descent algorithm that achieves an $O(\max\{\sigma^3\varepsilon^{-1}, \sigma\varepsilon^{-3}\})$ complexity bound for obtaining an $\varepsilon$-stationary point under smoothness and $\sigma^2$-bounded variance assumptions. This complexity is better than the $O(\sigma^2\varepsilon^{-4})$ bound often obtained by state-of-the-art SGD methods when $\sigma < O(\varepsilon^{-3})$. We also consider several extensions of our method, including constant and adaptive step sizes with single-loop, double-loop, and mini-batch variants. We compare our algorithms with existing methods on several datasets using two nonconvex models.
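The hybrid estimator is a convex combination of the recursive SARAH estimator (biased) and a plain stochastic gradient (unbiased). Below is a minimal sketch of the resulting single-loop update, not the authors' reference implementation: the oracle `stoch_grad`, the sampler `sample`, the fixed weight `beta`, the fixed step size `eta`, and the toy model are all illustrative assumptions (the paper also analyzes adaptive weights and step sizes, plus double-loop and mini-batch variants).

```python
import numpy as np

def hybrid_sgd(stoch_grad, sample, x0, beta=0.9, eta=0.1, T=1000):
    """Sketch of a single-loop hybrid SARAH-SGD iteration.

    Hybrid estimator (convex combination with weight beta):
        v_t = beta * (v_{t-1} + g(x_t; xi_t) - g(x_{t-1}; xi_t))   # biased SARAH part
              + (1 - beta) * g(x_t; zeta_t)                        # unbiased SGD part
    Iterate update:
        x_{t+1} = x_t - eta * v_t
    """
    x_prev = np.asarray(x0, dtype=float)
    v = stoch_grad(x_prev, sample())      # v_0: a plain stochastic gradient
    x = x_prev - eta * v
    for _ in range(T):
        xi, zeta = sample(), sample()     # two independent samples per iteration
        sarah = v + stoch_grad(x, xi) - stoch_grad(x_prev, xi)  # SARAH recursion
        sgd = stoch_grad(x, zeta)                               # unbiased estimator
        v = beta * sarah + (1.0 - beta) * sgd                   # hybrid estimator
        x_prev, x = x, x - eta * v
    return x

# Toy usage on a nonconvex model f(x) = sum(x^2 / (1 + x^2)) with Gaussian
# gradient noise; the model and the noise scale are hypothetical choices.
rng = np.random.default_rng(0)
grad = lambda x, noise: 2 * x / (1 + x ** 2) ** 2 + 0.1 * noise
x_final = hybrid_sgd(grad, lambda: rng.standard_normal(5), x0=np.ones(5))
print(np.linalg.norm(grad(x_final, 0.0)))  # gradient norm at the returned point
```

Setting `beta = 1` recovers the pure SARAH recursion and `beta = 0` recovers plain SGD; the paper's variance analysis interpolates between these extremes, which is what yields the improved complexity bound above.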
