
Lower Bounds for Non-Convex Stochastic Optimization

2019-12-05

Yossi Arjevani, Yair Carmon, John C. Duchi, Dylan J. Foster, Nathan Srebro, Blake Woodworth


Abstract

We lower bound the complexity of finding ε-stationary points (with gradient norm at most ε) using stochastic first-order methods. In a well-studied model where algorithms access smooth, potentially non-convex functions through queries to an unbiased stochastic gradient oracle with bounded variance, we prove that (in the worst case) any algorithm requires at least ε^-4 queries to find an ε-stationary point. The lower bound is tight, and establishes that stochastic gradient descent is minimax optimal in this model. In a more restrictive model where the noisy gradient estimates satisfy a mean-squared smoothness property, we prove a lower bound of ε^-3 queries, establishing the optimality of recently proposed variance reduction techniques.
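As an illustration of the oracle model in the abstract, the following sketch runs plain SGD with an unbiased, bounded-variance gradient oracle until the true gradient norm drops below ε. This is not code from the paper; the function names, step size, and the double-well test objective are all illustrative assumptions.

```python
import numpy as np

def sgd_until_stationary(grad_oracle, true_grad, x0, eps=0.05, lr=0.01, max_queries=100_000):
    """Run SGD until an eps-stationary point is reached (illustrative sketch).

    grad_oracle(x): unbiased, bounded-variance estimate of the gradient (one query per call).
    true_grad(x):   exact gradient, used only to check the stopping criterion.
    Returns the final iterate and the number of oracle queries used.
    """
    x = np.asarray(x0, dtype=float)
    for t in range(max_queries):
        if np.linalg.norm(true_grad(x)) <= eps:
            return x, t  # eps-stationary point found after t oracle queries
        x = x - lr * grad_oracle(x)
    return x, max_queries

# Hypothetical smooth non-convex test problem: f(x) = x^4/4 - x^2/2 (a double well),
# with additive Gaussian noise standing in for the stochastic oracle.
rng = np.random.default_rng(0)
true_grad = lambda x: x**3 - x
grad_oracle = lambda x: true_grad(x) + 0.1 * rng.standard_normal(x.shape)

x, queries = sgd_until_stationary(grad_oracle, true_grad, x0=[2.0])
```

In this model each oracle call counts as one query, and the paper's ε^-4 lower bound says that, in the worst case over smooth functions and admissible oracles, no first-order algorithm can beat SGD's query count by more than a constant factor.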
