FSSUAVL: A Discriminative Framework using Vision Models for Federated Self-Supervised Audio and Image Understanding

2025-04-13

Yasar Abbas Ur Rehman, Kin Wai Lau, Yuyang Xie, Ma Lan, Jiajun Shen

Abstract

Recent studies have demonstrated that vision models can effectively learn multimodal audio-image representations when the two modalities are paired. However, enabling deep models to learn representations from unpaired modalities remains an unresolved challenge. This issue is especially pertinent in scenarios like Federated Learning (FL), where data is often decentralized, heterogeneous, and lacks any reliable guarantee of pairing. Previous attempts tackled this issue by deploying auxiliary pretrained encoders or generative models on local clients, which invariably raises computational costs as the number of modalities increases. Unlike these approaches, in this paper we address the task of unpaired audio and image recognition with FSSUAVL, a single deep model pretrained in FL with self-supervised contrastive learning (SSL). Instead of aligning the audio and image modalities, FSSUAVL jointly discriminates them by projecting them into a common embedding space using contrastive SSL. This extends the utility of FSSUAVL to both paired and unpaired audio and image recognition tasks. Our experiments with CNN and ViT backbones demonstrate that FSSUAVL significantly improves performance across various image- and audio-based downstream tasks compared to using a separate deep model for each modality. Additionally, FSSUAVL's capacity to learn multimodal feature representations allows it to integrate auxiliary information, when available, to enhance recognition accuracy.
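The core idea of the abstract — discriminating audio and image instances in a common embedding space rather than aligning cross-modal pairs — can be sketched with a SimCLR-style instance-discrimination loss over a joint batch. This is a minimal NumPy illustration under my own assumptions (the paper's actual loss, projection heads, and FL training loop are not specified here); the function and variable names are hypothetical.

```python
import numpy as np

def l2_normalize(x, eps=1e-8):
    # Project embeddings onto the unit sphere before computing similarities.
    return x / (np.linalg.norm(x, axis=-1, keepdims=True) + eps)

def joint_nt_xent(view_a, view_b, temperature=0.1):
    """NT-Xent over a joint audio+image batch (illustrative, not the paper's
    exact loss): view_a[i] and view_b[i] are two augmentations of the same
    clip or image; every other row in the joint batch is a negative,
    regardless of modality, so no audio-image pairing is required."""
    z = l2_normalize(np.concatenate([view_a, view_b], axis=0))
    n = z.shape[0]
    sim = z @ z.T / temperature
    np.fill_diagonal(sim, -np.inf)  # a sample is never its own negative
    # Positive for row i is its other augmented view, offset by n // 2.
    pos = np.concatenate([np.arange(n // 2, n), np.arange(0, n // 2)])
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    loss = -(sim[np.arange(n), pos] - logsumexp)
    return loss.mean()

rng = np.random.default_rng(0)
audio = rng.normal(size=(4, 16))  # 4 audio embeddings (hypothetical encoder output)
image = rng.normal(size=(4, 16))  # 4 image embeddings in the same space
batch = np.concatenate([audio, image], axis=0)
# Small additive noise stands in for real augmentations in this sketch.
va = batch + 0.01 * rng.normal(size=batch.shape)
vb = batch + 0.01 * rng.normal(size=batch.shape)
print(joint_nt_xent(va, vb))
```

Because audio and image instances share one batch and one embedding space, a single model is pushed to separate every instance from all others across both modalities — the "joint discrimination" described above — without ever needing paired audio-image samples.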
