SOTAVerified

FACT: Federated Adversarial Cross Training

2023-06-01 · Code Available

Stefan Schrod, Jonas Lippl, Andreas Schäfer, Michael Altenbuchinger


Abstract

Federated Learning (FL) facilitates distributed model development to aggregate multiple confidential data sources. The information transfer among clients can be compromised by distributional differences, i.e., by non-i.i.d. data. A particularly challenging scenario is the federated adaptation of a model to a target client without access to annotated data. We propose Federated Adversarial Cross Training (FACT), which uses the implicit domain differences between source clients to identify domain shifts in the target domain. In each round of FL, FACT cross-initializes a pair of source clients to generate domain-specialized representations, which are then used as a direct adversary to learn a domain-invariant data representation. We empirically show that FACT outperforms state-of-the-art federated, non-federated, and source-free domain adaptation models on three popular multi-source-single-target benchmarks, and state-of-the-art Unsupervised Domain Adaptation (UDA) models on single-source-single-target experiments. We further study FACT's behavior with respect to communication restrictions and the number of participating clients.
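The core mechanism the abstract describes, cross-initializing pairs of source clients and using the disagreement of the resulting domain-specialized models on unlabeled target data as an adversarial signal, can be sketched in a toy numpy example. This is a hypothetical illustration, not the authors' implementation: `local_step` and `fact_round` are invented names, the models are plain linear regressors, and the real FACT trains neural feature extractors and classifier heads adversarially.

```python
import numpy as np


def local_step(w, X, y, lr=0.1):
    # One least-squares gradient step on a client's private data.
    grad = X.T @ (X @ w - y) / len(y)
    return w - lr * grad


def fact_round(local_ws, clients, X_target, lr=0.1):
    """One sketched round in the spirit of FACT (toy linear version).

    Each source client is cross-initialized with its partner's weights,
    trains locally, and the prediction discrepancy of the resulting
    domain-specialized models on unlabeled target data serves as the
    adversarial signal a shared representation would learn to minimize.
    """
    k = len(local_ws)
    # Cross-initialization: client i resumes from client (i+1)'s weights.
    crossed = [local_ws[(i + 1) % k] for i in range(k)]
    specialized = [local_step(w, X, y, lr)
                   for w, (X, y) in zip(crossed, clients)]
    # The specialized models disagree where the target domain shifts.
    preds = np.stack([X_target @ w for w in specialized])
    discrepancy = float(preds.std(axis=0).mean())
    return specialized, discrepancy


# Toy usage: two source clients with shifted feature distributions.
rng = np.random.default_rng(0)
d = 3
clients = [(rng.normal(size=(20, d)), rng.normal(size=20)),
           (rng.normal(loc=1.0, size=(20, d)), rng.normal(size=20))]
ws = [np.zeros(d), np.zeros(d)]
X_target = rng.normal(size=(10, d))
ws, disc = fact_round(ws, clients, X_target)
```

In the paper's full setting the discrepancy is driven down adversarially by the shared feature extractor over many FL rounds; here it is only computed, to show where the signal comes from.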

Tasks

Benchmark Results

| Dataset        | Model | Metric   | Claimed | Verified | Status     |
|----------------|-------|----------|---------|----------|------------|
| MNIST-to-USPS  | FACT  | Accuracy | 98.8    |          | Unverified |
| SVHN-to-MNIST  | FACT  | Accuracy | 90.6    |          | Unverified |
| USPS-to-MNIST  | FACT  | Accuracy | 98.6    |          | Unverified |

Reproductions