
Communication-Efficient and Robust Multi-Modal Federated Learning via Latent-Space Consensus

2026-03-19

Mohamed Badi, Chaouki Ben Issaid, Mehdi Bennis


Abstract

Federated learning (FL) enables collaborative model training across distributed devices without sharing raw data, but applying FL to multi-modal settings introduces significant challenges. Clients typically possess heterogeneous modalities and model architectures, making it difficult to align feature spaces efficiently while preserving privacy and minimizing communication costs. To address this, we introduce CoMFed, a Communication-Efficient Multi-Modal Federated Learning framework that uses learnable projection matrices to generate compressed latent representations. A latent-space regularizer aligns these representations across clients, improving cross-modal consistency and robustness to outliers. Experiments on human activity recognition benchmarks show that CoMFed achieves competitive accuracy with minimal overhead.
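The abstract describes two mechanisms: per-client learnable projection matrices that compress heterogeneous modality features into a shared low-dimensional latent space, and a regularizer that pulls those latents toward a cross-client consensus. The sketch below illustrates that idea under stated assumptions — the paper's actual architecture, dimensions, and regularizer form are not given here, so the feature sizes, latent dimension, and mean-based consensus penalty are all illustrative choices, not the authors' method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: three clients whose modalities yield different
# feature dimensions (e.g. accelerometer, gyroscope, audio features).
feature_dims = [12, 20, 8]   # per-client raw feature sizes (assumed)
latent_dim = 4               # shared compressed latent dimension (assumed)

# One learnable projection matrix per client (random init stands in
# for whatever the clients would actually learn during training).
projections = [rng.normal(scale=0.1, size=(d, latent_dim)) for d in feature_dims]

def client_latent(features, W):
    """Project a client's raw modality features into the shared latent space."""
    return features @ W

def consensus_regularizer(latents):
    """Mean squared deviation of each client latent from the consensus mean.

    This is one plausible instantiation of a latent-space alignment
    penalty; the paper's exact regularizer may differ.
    """
    consensus = np.mean(latents, axis=0)
    return float(np.mean([(z - consensus) ** 2 for z in latents]))

# One sample per client: the same underlying activity observed
# through each client's own modality (simulated with random features).
samples = [rng.normal(size=d) for d in feature_dims]
latents = [client_latent(x, W) for x, W in zip(samples, projections)]

penalty = consensus_regularizer(latents)
print(f"latent shape per client: {latents[0].shape}, alignment penalty: {penalty:.4f}")
```

Note the communication-efficiency angle: only the `latent_dim`-sized representations (or the small projection matrices) would need to cross the network, not the raw heterogeneous features, and the penalty is zero exactly when all client latents coincide.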
