Asynchronous Stochastic Optimization Robust to Arbitrary Delays

2021-06-22 · NeurIPS 2021

Alon Cohen, Amit Daniely, Yoel Drori, Tomer Koren, Mariano Schain

Abstract

We consider stochastic optimization with delayed gradients where, at each time step t, the algorithm makes an update using a stale stochastic gradient from step t - d_t for some arbitrary delay d_t. This setting abstracts asynchronous distributed optimization, in which a central server receives gradient updates computed by worker machines whose computation and communication loads might vary significantly over time. In the general non-convex smooth optimization setting, we give a simple and efficient algorithm that requires O(σ²/ε⁴ + τ/ε²) steps for finding an ε-stationary point x, where τ is the average delay (1/T) Σ_{t=1}^T d_t and σ² is the variance of the stochastic gradients. This improves over previous work, which showed that stochastic gradient descent achieves the same rate but with respect to the maximal delay max_t d_t, which can be significantly larger than the average delay, especially in heterogeneous distributed systems. Our experiments demonstrate the efficacy and robustness of our algorithm in cases where the delay distribution is skewed or heavy-tailed.
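The delayed-gradient setting described above can be sketched as a small simulation. This is a hypothetical illustration, not the authors' algorithm: plain SGD on a toy 1-D quadratic where each update at step t applies a stochastic gradient evaluated at the stale iterate from step t - d_t, with a skewed delay distribution (mostly zero delay, occasionally large) of the kind the paper argues is common in heterogeneous systems.

```python
import random

def delayed_sgd(T=2000, lr=0.02, sigma=0.1, seed=0):
    """SGD with arbitrarily delayed gradients on f(x) = 0.5 * x**2,
    so grad f(x) = x. At step t the update uses a stochastic gradient
    computed at the iterate from step t - d_t (hypothetical setup)."""
    rng = random.Random(seed)
    xs = [1.0]  # iterate history: x_0, x_1, ...
    for t in range(T):
        # Skewed delay distribution: usually fresh, occasionally very stale.
        d_t = rng.choice([0, 0, 0, 1, 5, 20])
        stale = xs[max(0, t - d_t)]           # iterate from step t - d_t
        g = stale + rng.gauss(0.0, sigma)     # stochastic gradient at stale point
        xs.append(xs[-1] - lr * g)            # server applies the stale gradient
    return xs[-1]

# Despite the stale gradients, the iterate settles near the stationary
# point x = 0, up to noise of order sigma.
print(delayed_sgd())
```

Note that the step size here is kept small relative to the largest delay; with delays that are only large on average being rare, the paper's point is that tuning to the average delay τ rather than the worst-case max_t d_t suffices.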
