Stochastic Nonconvex Optimization with Large Minibatches
2017-09-25
Weiran Wang, Nathan Srebro
Abstract
We study stochastic optimization of nonconvex loss functions, which are typical objectives for training neural networks. We propose stochastic approximation algorithms which optimize a series of regularized, nonlinearized losses on large minibatches of samples, using only first-order gradient information. Our algorithms provably converge to an approximate critical point of the expected objective with faster rates than minibatch stochastic gradient descent, and facilitate better parallelization by allowing larger minibatches.
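To make the idea concrete, below is a minimal sketch of a minibatch-prox style loop matching the abstract's description: each round approximately minimizes the regularized minibatch loss itself (rather than its linearization, as plain SGD does), using only a few inner first-order gradient steps. All names (`minibatch_prox`, `loss_grad`, `sample_batch`) and hyperparameter values are illustrative assumptions, not the paper's exact algorithm or constants.

```python
import numpy as np

def minibatch_prox(loss_grad, w0, sample_batch, gamma=1.0,
                   num_rounds=100, inner_steps=20, inner_lr=0.1):
    """Sketch of a minibatch-prox iteration (assumed form, not the paper's
    exact method): each round approximately solves the regularized,
    non-linearized minibatch subproblem
        min_w  mean_i f(w; xi_i) + (gamma / 2) * ||w - w_prev||^2
    with a few plain gradient steps, i.e. first-order information only."""
    w = w0.copy()
    for _ in range(num_rounds):
        batch = sample_batch()        # draw a fresh (large) minibatch
        w_prev = w.copy()             # prox center for this round
        for _ in range(inner_steps):
            # gradient of the minibatch loss plus the proximal term
            g = loss_grad(w, batch) + gamma * (w - w_prev)
            w -= inner_lr * g
    return w

if __name__ == "__main__":
    # Toy nonconvex problem: regression through a tanh nonlinearity.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(10_000, 5))
    w_true = rng.normal(size=5)
    y = np.tanh(X @ w_true) + 0.1 * rng.normal(size=10_000)

    def loss_grad(w, batch):
        Xb, yb = batch
        t = np.tanh(Xb @ w)
        # exact gradient of (1/2) * mean (tanh(x.w) - y)^2 w.r.t. w
        return Xb.T @ ((t - yb) * (1.0 - t ** 2)) / len(yb)

    def sample_batch(b=512):
        idx = rng.integers(0, len(y), size=b)
        return X[idx], y[idx]

    w_hat = minibatch_prox(loss_grad, np.zeros(5), sample_batch)
    print("distance to w_true:", np.linalg.norm(w_hat - w_true))
```

Because each round does real optimization work on its minibatch instead of a single gradient step, larger minibatches remain useful, which is the parallelization benefit the abstract points to.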