FedTilt: Towards Multi-Level Fairness-Preserving and Robust Federated Learning

2025-03-15

Binghui Zhang, Luis Mares De La Cruz, Binghui Wang

Abstract

Federated Learning (FL) is an emerging decentralized learning paradigm that can partly address privacy concerns that traditional centralized and distributed learning cannot handle. To make FL practical, constraints such as fairness and robustness must also be considered. However, existing robust FL methods often produce unfair models, and existing fair FL methods consider only one level of fairness (client fairness) and are not robust to persistent outliers (i.e., outliers injected into every training round), which are common in real-world FL settings. We propose FedTilt, a novel FL framework that preserves multi-level fairness and is robust to outliers. In particular, we consider two common levels of fairness: client fairness -- uniformity of performance across clients -- and client data fairness -- uniformity of performance across different classes of data within a client. FedTilt is inspired by the recently proposed tilted empirical risk minimization, which introduces tilt hyperparameters that can be flexibly tuned. Theoretically, we show how tuning the tilt values achieves the two levels of fairness and mitigates persistent outliers, and we derive the convergence condition of FedTilt. Empirically, evaluation results on a suite of realistic federated datasets in diverse settings show the effectiveness and flexibility of the FedTilt framework and its superiority over state-of-the-art methods.
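To illustrate the core building block the abstract refers to, below is a minimal sketch of the tilted empirical risk from tilted ERM (Li et al.), which replaces the plain average of per-sample losses with a log-mean-exp weighted by a tilt hyperparameter t: positive t emphasizes the worst losses (promoting uniformity/fairness), while negative t suppresses large losses (promoting robustness to outliers). This is a generic illustration of tilted ERM, not FedTilt's exact multi-level objective; the function name and its use of raw per-sample losses are assumptions for the example.

```python
import numpy as np

def tilted_loss(losses, t):
    """Tilted empirical risk: (1/t) * log(mean(exp(t * losses))).

    t > 0 up-weights the largest losses (fairness-style uniformity),
    t < 0 down-weights them (robustness to outliers),
    t -> 0 recovers the standard average loss.
    """
    losses = np.asarray(losses, dtype=float)
    if t == 0.0:
        return losses.mean()  # limit t -> 0 is ordinary ERM
    # Stable log-mean-exp: subtract the max before exponentiating.
    z = t * losses
    m = z.max()
    return (np.log(np.exp(z - m).mean()) + m) / t
```

For example, with per-sample losses `[0.0, 2.0]`, a positive tilt yields a value above the mean of 1.0 (stressing the worse sample), while a negative tilt yields a value below it (discounting the potential outlier).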
