
Missing Modality Imagination Network for Emotion Recognition with Uncertain Missing Modalities

2021-08-01 · ACL 2021

Jinming Zhao, Ruichen Li, Qin Jin


Abstract

Multimodal fusion has been shown to improve emotion recognition performance in previous works. However, in real-world applications we often encounter missing modalities, and which modalities will be missing is uncertain, which causes fixed multimodal fusion to fail. In this work, we propose a unified model, the Missing Modality Imagination Network (MMIN), to deal with the uncertain missing modality problem. MMIN learns robust joint multimodal representations that can predict the representation of any missing modality from the available modalities under different missing-modality conditions. Comprehensive experiments on two benchmark datasets demonstrate that the unified MMIN model significantly improves emotion recognition performance under both uncertain missing-modality testing conditions and the ideal full-modality testing condition. The code will be available at https://github.com/AIM3-RUC/MMIN.
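The core idea in the abstract — "imagining" the representation of a missing modality from the available ones — can be illustrated with a toy sketch. Note this is not the paper's actual architecture (MMIN's details are in the paper and repository); the linear imagination module, the embedding dimensions, and the synthetic data below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: pretend we have per-sample "audio" embeddings available at
# test time, while the "text" modality is missing. Here the two are
# synthetically related by a hidden linear map the module must recover.
d_audio, d_text, n = 8, 6, 200
audio = rng.normal(size=(n, d_audio))
hidden_map = rng.normal(size=(d_audio, d_text))
text = audio @ hidden_map  # ground-truth text embeddings (unseen at test time)

# A minimal linear "imagination" module trained with MSE regression to
# predict the missing text embedding from the available audio embedding.
W = np.zeros((d_audio, d_text))
lr = 0.05
for _ in range(1000):
    pred = audio @ W
    grad = audio.T @ (pred - text) / n  # gradient of the mean-squared error
    W -= lr * grad

# At inference, the imagined text embedding substitutes for the missing
# modality before fusion and classification.
imagined_text = audio @ W
mse = float(np.mean((imagined_text - text) ** 2))
print(f"reconstruction MSE: {mse:.6f}")
```

In the toy setting above the relation is exactly linear, so the regression recovers it almost perfectly; the paper's contribution is learning such cross-modality imagination jointly for *any* subset of missing modalities, with far richer (non-linear) modules.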
