
Fast and Robust Distributed Learning in High Dimension

2019-05-05

El-Mahdi El-Mhamdi, Rachid Guerraoui, Sébastien Rouault


Abstract

Could a gradient aggregation rule (GAR) for distributed machine learning be both robust and fast? This paper answers in the affirmative through multi-Bulyan. Given n workers, f of which are arbitrarily malicious (Byzantine) and m = n - f are not, we prove that multi-Bulyan can ensure a strong form of Byzantine resilience, as well as an m/n slowdown compared to averaging, the fastest (but non-Byzantine-resilient) rule for distributed machine learning. When m ≈ n (almost all workers are correct), multi-Bulyan reaches the speed of averaging. We also prove that multi-Bulyan's cost in local computation is O(d) (like averaging), an important feature for ML where d commonly reaches 10^9, while robust alternatives have at least quadratic cost in d. Our theoretical findings are complemented with an experimental evaluation which, in addition to supporting the linear O(d) complexity argument, conveys the fact that multi-Bulyan's parallelisability further adds to its efficiency.
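To make the contrast with plain averaging concrete, the following is a minimal sketch of a Bulyan-style GAR, not the authors' exact multi-Bulyan algorithm: it uses a single multi-Krum scoring pass (rather than the paper's full iterative procedure) to select theta = n - 2f gradients, then takes a coordinate-wise trimmed average of the beta = theta - 2f values closest to the median. The function names and parameters are illustrative assumptions.

```python
import numpy as np

def multi_krum_select(grads, f, m):
    """Return the m gradients with the lowest Krum scores.

    Krum score of worker i: sum of squared distances to its
    n - f - 2 nearest neighbours (self excluded).
    """
    n = len(grads)
    d2 = np.array([[np.sum((g - h) ** 2) for h in grads] for g in grads])
    # np.sort(d2[i])[0] is the zero self-distance; keep the next n-f-2 values.
    scores = np.array([np.sort(d2[i])[1:n - f - 1].sum() for i in range(n)])
    return [grads[i] for i in np.argsort(scores)[:m]]

def bulyan_aggregate(grads, f):
    """Simplified Bulyan-style aggregation (illustrative sketch).

    Phase 1: select theta = n - 2f gradients by Krum score.
    Phase 2: per coordinate, average the beta = theta - 2f values
    closest to the coordinate-wise median.
    Requires n >= 4f + 3 so that beta >= 1.
    """
    grads = [np.asarray(g, dtype=float) for g in grads]
    n = len(grads)
    theta = n - 2 * f
    beta = theta - 2 * f
    S = np.stack(multi_krum_select(grads, f, theta))      # shape (theta, d)
    med = np.median(S, axis=0)
    closest = np.argsort(np.abs(S - med), axis=0)[:beta]  # (beta, d) indices
    return np.take_along_axis(S, closest, axis=0).mean(axis=0)
```

With n = 7 workers and f = 1 Byzantine worker sending an extreme gradient, this rule returns a vector near the honest gradients, whereas plain averaging would be pulled far off by the single outlier. Each phase-2 step is coordinate-wise, which is what keeps the aggregation cost linear in d and easy to parallelise.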
