Multi-modal Speech Emotion Recognition via Feature Distribution Adaptation Network

2024-10-29

Shaokai Li, Yixuan Ji, Peng Song, Haoqin Sun, Wenming Zheng

Abstract

In this paper, we propose a novel deep inductive transfer learning framework, named the feature distribution adaptation network, to tackle the challenging problem of multi-modal speech emotion recognition. Our method uses deep transfer learning strategies to align the visual and audio feature distributions and obtain a consistent representation of emotion, thereby improving speech emotion recognition performance. In our model, pre-trained ResNet-34 networks extract features from facial expression images and acoustic Mel spectrograms, respectively. A cross-attention mechanism is then introduced to model the intrinsic similarity relationships between the multi-modal features. Finally, multi-modal feature distribution adaptation is performed efficiently by a feed-forward network, which is extended with the local maximum mean discrepancy (LMMD) loss. Experiments on two benchmark datasets demonstrate that our model achieves excellent performance compared with existing methods.
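
To make the three stages named in the abstract concrete (ResNet-34 feature extraction, cross-attention fusion, and a feed-forward adaptation head), here is a minimal PyTorch sketch. All layer sizes, the bidirectional single-token attention, the seven-class output, and the 3-channel spectrogram input are illustrative assumptions rather than the authors' actual design, and the LMMD training objective itself is not reproduced here.

```python
import torch
import torch.nn as nn
from torchvision import models


class FDANSketch(nn.Module):
    """Illustrative sketch of the described pipeline, not the authors' code."""

    def __init__(self, dim=512, heads=8, num_classes=7):
        super().__init__()
        # One pre-trained ResNet-34 per modality; the classifier head is
        # removed so each backbone yields a 512-d feature vector.
        self.visual_net = models.resnet34(weights="IMAGENET1K_V1")
        self.visual_net.fc = nn.Identity()
        self.audio_net = models.resnet34(weights="IMAGENET1K_V1")
        self.audio_net.fc = nn.Identity()
        # Cross-attention in both directions to model cross-modal similarity.
        self.audio_to_visual = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.visual_to_audio = nn.MultiheadAttention(dim, heads, batch_first=True)
        # Feed-forward network on the fused features; during training its
        # activations would additionally be aligned across modalities with
        # an LMMD-style loss (not shown here).
        self.ffn = nn.Sequential(
            nn.Linear(2 * dim, dim),
            nn.ReLU(),
            nn.Linear(dim, num_classes),
        )

    def forward(self, face_img, mel_spec):
        # Each backbone returns one global vector; treat it as a
        # single-token sequence so nn.MultiheadAttention applies.
        v = self.visual_net(face_img).unsqueeze(1)  # (B, 1, 512)
        a = self.audio_net(mel_spec).unsqueeze(1)   # (B, 1, 512)
        v_att, _ = self.audio_to_visual(a, v, v)    # audio queries attend to visual
        a_att, _ = self.visual_to_audio(v, a, a)    # visual queries attend to audio
        fused = torch.cat([v_att, a_att], dim=-1).squeeze(1)  # (B, 1024)
        return self.ffn(fused)


model = FDANSketch()
faces = torch.randn(4, 3, 224, 224)  # facial expression images
mels = torch.randn(4, 3, 224, 224)   # Mel spectrograms tiled to 3 channels
logits = model(faces, mels)          # (4, 7) emotion logits
```

During training, an LMMD-style loss would compare per-class feature distributions of the two modalities at the feed-forward layer, encouraging class-conditional alignment rather than only global distribution matching.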
