
Fairness-aware Federated Learning

2021-09-29

Zhuozhuo Tu, Zhiqiang Xu, Tairan Huang, Dacheng Tao, Ping Li


Abstract

Federated Learning is a machine learning technique in which a network of clients collaborates with a server to learn a centralized model while keeping data localized. In such a setting, naively minimizing an aggregate loss may introduce bias and degrade the model's performance on certain clients. To address this issue, we propose a new federated learning framework, called FAFL, whose goal is to minimize the worst-case weighted client losses over an uncertainty set. By deriving a variational representation, we show that this framework yields a fairness-aware objective that can be easily optimized by solving a joint minimization problem over the model parameters and a dual variable. We then propose an optimization algorithm for FAFL that can be implemented efficiently in a federated setting, and we provide convergence guarantees. We further prove generalization bounds for learning with this objective. Experiments on real-world datasets demonstrate the effectiveness of our framework in achieving both accuracy and fairness.
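The abstract does not specify FAFL's uncertainty set or its variational representation, but the general recipe it describes — replacing the average client loss with a worst-case weighted loss, then minimizing a dual form jointly over the model parameters and a dual variable — can be illustrated on a toy problem. The sketch below assumes a KL-divergence ball of radius `rho` around uniform client weights (an assumption for illustration; the paper's actual uncertainty set may differ), whose dual is the smooth objective `min_{theta, lam > 0} lam*rho + lam*log((1/n) * sum_i exp(l_i(theta)/lam))`:

```python
import numpy as np

# Toy setup: 5 clients, each with a scalar least-squares target; b[4] is a
# "hard" client that plain loss averaging would underserve.
b = np.array([0.0, 0.2, 0.4, 0.6, 3.0])
rho = 0.5  # assumed radius of the KL uncertainty set (illustrative choice)


def client_losses(theta):
    """Per-client losses l_i(theta) = 0.5 * (theta - b_i)^2."""
    return 0.5 * (theta - b) ** 2


def grads(theta, lam):
    """Gradients of the dual objective w.r.t. theta and the dual variable lam."""
    l = client_losses(theta)
    z = l / lam
    w = np.exp(z - z.max())          # shifted exponentials for numerical stability
    p = w / w.sum()                  # tilted client weights: a soft worst case
    log_mean_exp = z.max() + np.log(w.mean())
    g_theta = p @ (theta - b)        # sum_i p_i * dl_i/dtheta
    g_lam = rho + log_mean_exp - (p @ l) / lam  # equals rho - KL(p || uniform)
    return g_theta, g_lam


# Joint gradient descent over (theta, lam), keeping the dual variable positive.
theta, lam = 0.0, 1.0
for _ in range(5000):
    g_t, g_l = grads(theta, lam)
    theta -= 0.05 * g_t
    lam = max(lam - 0.05 * g_l, 1e-2)

# Averaging losses would give theta = b.mean() = 0.84; the robust objective
# pulls theta toward the worst-off client and lowers the maximum client loss.
print(theta, client_losses(theta).max())
```

Note how the inner maximization disappears: at each step the current losses induce tilted weights `p` that upweight the worst-off clients, and the dual variable `lam` adapts until the tilt matches the radius of the uncertainty set.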
