Federated Learning Framework via Distributed Mutual Learning

2025-03-03

Yash Gupta

Abstract

Federated Learning often relies on sharing full or partial model weights, which can burden network bandwidth and raise privacy risks. We present a loss-based alternative using distributed mutual learning. Instead of transmitting weights, clients periodically share their loss predictions on a public test set. Each client then refines its model by combining its local loss with the average Kullback-Leibler divergence over losses from other clients. This collaborative approach both reduces transmission overhead and preserves data privacy. Experiments on a face mask detection task demonstrate that our method outperforms weight-sharing baselines, achieving higher accuracy on unseen data while providing stronger generalization and privacy benefits.
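The objective described above — each client's local loss plus the average Kullback-Leibler divergence to the other clients' predictions on the shared public test set — can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the function names are invented, and the KL direction (peer distribution as reference) follows the standard deep mutual learning formulation, which is an assumption here.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kl_divergence(p, q, eps=1e-12):
    # Mean KL(p || q) over samples; eps guards against log(0).
    return np.mean(np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1))

def mutual_learning_loss(local_loss, own_probs, peer_probs_list):
    # Combined objective: the client's local loss plus the average
    # KL divergence from each peer's predictions to its own.
    kl_terms = [kl_divergence(peer, own_probs) for peer in peer_probs_list]
    return local_loss + np.mean(kl_terms)
```

In a training round, each client would broadcast only `own_probs` (its outputs on the public test set) rather than model weights, and fold the peers' outputs into its next gradient step via this combined loss.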