Byzantine-Robust Learning on Heterogeneous Datasets via Resampling

2020-09-28

Lie He, Sai Praneeth Karimireddy, Martin Jaggi

Abstract

In Byzantine-robust distributed optimization, a central server wants to train a machine learning model over data distributed across multiple workers. However, a fraction of these workers may deviate from the prescribed algorithm and send arbitrary messages to the server. While this problem has received significant attention recently, most current defenses assume that the workers hold identically distributed data. For the realistic case where the data across workers are heterogeneous (non-iid), we design new attacks that circumvent these defenses and lead to a significant loss of performance. We then propose a universal resampling scheme that addresses data heterogeneity at negligible computational cost. We validate our approach theoretically and experimentally, showing that combining resampling with existing robust algorithms is effective against challenging attacks.
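The abstract does not spell out the scheme's details, but the idea it describes — mixing worker gradients by resampling before handing them to an existing robust aggregator — could look roughly like the sketch below. The group size s, the helper names resample and coordinate_wise_median, and the choice of coordinate-wise median as the robust rule are illustrative assumptions, not the paper's specification.

```python
import numpy as np

def resample(gradients, s=2, rng=None):
    """Average random groups of s worker gradients to reduce heterogeneity
    before a robust aggregation rule is applied.

    Hypothetical sketch: the group size s and the grouping strategy are
    assumptions, not taken from the paper.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = len(gradients)
    perm = rng.permutation(n)
    # Partition a random permutation of the workers into groups of size s,
    # dropping any leftover workers that do not fill a full group.
    groups = [perm[i:i + s] for i in range(0, n - n % s, s)]
    return [np.mean([gradients[j] for j in g], axis=0) for g in groups]

def coordinate_wise_median(gradients):
    """A standard robust aggregator; any existing rule (e.g. Krum) could be
    plugged in here instead."""
    return np.median(np.stack(gradients), axis=0)

# Usage: mix the worker gradients first, then aggregate robustly.
worker_grads = [np.random.randn(10) for _ in range(12)]  # placeholder gradients
update = coordinate_wise_median(resample(worker_grads, s=2))
```

The point of the resampling step is that each averaged group blends data from several workers, so the inputs to the robust aggregator look more homogeneous even when the underlying worker data are non-iid.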
