XM-ALIGN: Unified Cross-Modal Embedding Alignment for Face-Voice Association

2025-12-07

Zhihua Fang, Shumei Tao, Junxu Wang, Liang He

Abstract

This paper introduces XM-ALIGN (Unified Cross-Modal Embedding Alignment Framework), our solution to the FAME challenge at ICASSP 2026. The framework combines explicit and implicit alignment mechanisms, significantly improving cross-modal verification performance in both "heard" and "unheard" languages. We extract feature embeddings from the face and voice encoders, jointly optimize them with a shared classifier, and employ a mean squared error (MSE) embedding alignment loss to keep the two modalities tightly aligned. Data augmentation is also applied during training to improve generalization. Experimental results show that our approach achieves superior performance on the MAV-Celeb dataset. The code will be released at https://github.com/PunkMale/XM-ALIGN.
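
The abstract describes the objective only at a high level: embeddings from the face and voice encoders pass through a shared classifier (implicit alignment) while an MSE loss pulls paired embeddings together (explicit alignment). The PyTorch sketch below shows one way such an objective could be wired up; the names (XMAlignHead, xm_align_loss), the projection layers, the L2 normalization, and the weight lambda_align are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class XMAlignHead(nn.Module):
    """Per-modality projections feeding one shared identity classifier (hypothetical sketch)."""
    def __init__(self, embed_dim: int = 512, num_identities: int = 1000):
        super().__init__()
        # Projections map encoder outputs into a common embedding space.
        self.face_proj = nn.Linear(embed_dim, embed_dim)
        self.voice_proj = nn.Linear(embed_dim, embed_dim)
        # A single classifier shared by both modalities (implicit alignment).
        self.classifier = nn.Linear(embed_dim, num_identities)

    def forward(self, face_emb: torch.Tensor, voice_emb: torch.Tensor):
        f = F.normalize(self.face_proj(face_emb), dim=-1)
        v = F.normalize(self.voice_proj(voice_emb), dim=-1)
        return f, v, self.classifier(f), self.classifier(v)

def xm_align_loss(f, v, logits_f, logits_v, labels, lambda_align: float = 1.0):
    # Implicit alignment: both modalities must predict the same identity
    # through the shared classifier.
    ce = F.cross_entropy(logits_f, labels) + F.cross_entropy(logits_v, labels)
    # Explicit alignment: MSE pulls paired face/voice embeddings together.
    mse = F.mse_loss(f, v)
    return ce + lambda_align * mse

# Toy usage with random tensors standing in for real face/voice encoder outputs.
head = XMAlignHead(embed_dim=512, num_identities=1000)
face_emb = torch.randn(8, 512)
voice_emb = torch.randn(8, 512)
labels = torch.randint(0, 1000, (8,))
f, v, logits_f, logits_v = head(face_emb, voice_emb)
loss = xm_align_loss(f, v, logits_f, logits_v, labels)
loss.backward()
```

At verification time, the similarity between a face embedding and a voice embedding (e.g. cosine similarity between f and v) would typically serve as the cross-modal matching score.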
