Geometry, Computation, and Optimality in Stochastic Optimization

2019-09-23 · NeurIPS 2019

Chen Cheng, Daniel Levy, John C. Duchi

Abstract

We study computational and statistical consequences of problem geometry in stochastic and online optimization. By focusing on constraint set and gradient geometry, we characterize the problem families for which stochastic- and adaptive-gradient methods are (minimax) optimal and, conversely, when nonlinear updates, such as those mirror descent employs, are necessary for optimal convergence. When the constraint set is quadratically convex, diagonally pre-conditioned stochastic gradient methods are minimax optimal. We provide quantitative converses showing that the "distance" of the underlying constraints from quadratic convexity determines the sub-optimality of subgradient methods. These results apply, for example, to any ℓ_p-ball for p < 2, and the computation/accuracy tradeoffs they demonstrate exhibit a striking analogy to those in Gaussian sequence models.
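The abstract contrasts two update families: linear, diagonally preconditioned stochastic gradient steps (as in AdaGrad) and the nonlinear updates of mirror descent. A minimal sketch of both on a toy stochastic least-squares problem; the problem data, step sizes, and the choice of the p-norm mirror map are illustrative assumptions, not details from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy problem: stochastic least squares, f(x) = E[(a^T x - b)^2] / 2.
d, n = 20, 500
A = rng.normal(size=(n, d))
x_star = rng.normal(size=d)
b = A @ x_star + 0.1 * rng.normal(size=n)

def loss(x):
    return 0.5 * np.mean((A @ x - b) ** 2)

def stochastic_grad(x):
    i = rng.integers(n)  # sample one data point
    return (A[i] @ x - b[i]) * A[i]

def adagrad(steps=2000, eta=0.5):
    # Diagonally preconditioned SGD (AdaGrad-style): per-coordinate step
    # sizes built from accumulated squared gradients -- a *linear* update.
    x = np.zeros(d)
    G = np.zeros(d)
    for _ in range(steps):
        g = stochastic_grad(x)
        G += g * g
        x -= eta * g / (np.sqrt(G) + 1e-8)
    return x

def mirror_descent_pnorm(steps=2000, eta=0.01, p=1.5):
    # Mirror descent with the p-norm mirror map psi(x) = ||x||_p^2 / 2,
    # a *nonlinear* update suited to l_p geometry with p < 2.
    q = p / (p - 1)  # dual exponent
    y = np.zeros(d)  # dual (mirror) iterate
    x = np.zeros(d)
    for _ in range(steps):
        y -= eta * stochastic_grad(x)
        norm_q = np.linalg.norm(y, q)
        if norm_q > 0:
            # Inverse mirror map: grad of the conjugate, the q-norm link function.
            x = np.sign(y) * np.abs(y) ** (q - 1) * norm_q ** (2 - q)
    return x
```

With p = 2 the mirror map is ½‖x‖², the link function is the identity, and mirror descent reduces to plain SGD; the nonlinearity the abstract refers to appears exactly when p < 2.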
