
Multi-consensus Decentralized Accelerated Gradient Descent

2020-05-02

Haishan Ye, Luo Luo, Ziang Zhou, Tong Zhang

Abstract

This paper considers the decentralized convex optimization problem, which has a wide range of applications in large-scale machine learning, sensor networks, and control theory. We propose novel algorithms that achieve optimal computation complexity and near-optimal communication complexity. Our theoretical results give an affirmative answer to the open problem of whether there exists an algorithm whose communication complexity (nearly) matches the lower bound depending on the global condition number rather than the local one. Furthermore, the linear convergence of our algorithms depends only on the strong convexity of the global objective and does not require the local functions to be convex. The design of our methods relies on a novel integration of well-known techniques, including Nesterov's acceleration, multi-consensus, and gradient tracking. Empirical studies show that our methods outperform existing approaches in machine learning applications.
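To illustrate two of the building blocks the abstract names, here is a minimal NumPy sketch of decentralized gradient descent with gradient tracking and multi-consensus (several mixing rounds per iteration). This is not the paper's algorithm (it omits Nesterov's acceleration and the paper's exact parameter choices); the local quadratic objectives, the ring-graph mixing matrix `W`, the step size `eta`, and all names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents, dim = 5, 3

# Assumed local objectives: f_i(x) = 0.5 x^T H_i x - b_i^T x,
# with H_i = M_i^T M_i + I so each f_i (and the global sum) is strongly convex.
M = rng.standard_normal((n_agents, dim, dim))
H = np.array([M[i].T @ M[i] + np.eye(dim) for i in range(n_agents)])
b = rng.standard_normal((n_agents, dim))

def local_grad(i, x):
    return H[i] @ x - b[i]

# Doubly stochastic mixing matrix for a ring graph (assumption).
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    W[i, i] = 0.5
    W[i, (i - 1) % n_agents] = 0.25
    W[i, (i + 1) % n_agents] = 0.25

def multi_consensus(X, rounds=5):
    # Multi-consensus: repeated mixing shrinks the consensus error
    # geometrically within a single outer iteration.
    for _ in range(rounds):
        X = W @ X
    return X

X = np.zeros((n_agents, dim))  # row i is agent i's local iterate x_i
S = np.array([local_grad(i, X[i]) for i in range(n_agents)])  # trackers s_i
eta = 0.05  # illustrative step size

for _ in range(1000):
    X_new = multi_consensus(X - eta * S)
    # Gradient tracking: s_i^{k+1} = mix(s^k)_i + grad_i(x_i^{k+1}) - grad_i(x_i^k),
    # which preserves sum_i s_i = sum_i grad_i, so s_i tracks the average gradient.
    S = multi_consensus(S) + np.array(
        [local_grad(i, X_new[i]) - local_grad(i, X[i]) for i in range(n_agents)]
    )
    X = X_new

# All agents should agree and approximately minimize the global objective.
x_star = np.linalg.solve(H.sum(axis=0), b.sum(axis=0))
print(np.max(np.abs(X - x_star)))
```

The gradient-tracking update keeps each agent's direction `s_i` close to the average gradient of the global objective, which is why (as the abstract notes) convergence can hinge on the strong convexity of the global objective rather than on each local function.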
