MMER: Multimodal Multi-task Learning for Speech Emotion Recognition
2022-03-31
Sreyan Ghosh, Utkarsh Tyagi, S Ramaneswaran, Harshvardhan Srivastava, Dinesh Manocha
- github.com/sreyan88/mmer (official implementation, PyTorch, ★ 81)
Abstract
In this paper, we propose MMER, a novel multimodal multi-task learning approach for speech emotion recognition. MMER leverages a multimodal network based on early fusion and cross-modal self-attention between the text and acoustic modalities, and solves three novel auxiliary tasks for learning emotion recognition from spoken utterances. In practice, MMER outperforms all our baselines and achieves state-of-the-art performance on the IEMOCAP benchmark. Additionally, we conduct extensive ablation studies and result analysis to demonstrate the effectiveness of our proposed approach.
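The abstract describes cross-modal self-attention between the text and acoustic modalities. Below is a minimal PyTorch sketch of one common formulation of that idea, in which text features act as queries attending over acoustic keys and values; it is an illustration under stated assumptions, not the authors' exact architecture. The dimension sizes, encoder choices (e.g., BERT token embeddings, wav2vec 2.0 frame features), and module names are all assumptions.

```python
# Minimal sketch of cross-modal attention between text and acoustic
# features. Illustrative only; NOT the MMER authors' implementation.
import torch
import torch.nn as nn

class CrossModalAttention(nn.Module):
    """Text features attend to acoustic features (queries from text,
    keys/values from speech), followed by a residual connection."""
    def __init__(self, dim: int = 768, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, text: torch.Tensor, speech: torch.Tensor) -> torch.Tensor:
        # text:   (batch, text_len, dim)   e.g. BERT token embeddings
        # speech: (batch, frame_len, dim)  e.g. wav2vec 2.0 frame features
        fused, _ = self.attn(query=text, key=speech, value=speech)
        return self.norm(text + fused)

# Toy usage with random tensors standing in for encoder outputs.
text = torch.randn(2, 50, 768)      # assumed text sequence length 50
speech = torch.randn(2, 200, 768)   # assumed acoustic frame length 200
out = CrossModalAttention()(text, speech)
print(out.shape)  # torch.Size([2, 50, 768])
```

In this formulation, the fused representation keeps the text sequence length while injecting acoustic context into each token position; the symmetric direction (speech attending to text) is equally plausible and the paper's early-fusion design may combine both.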
Tasks
- Speech Emotion Recognition
Benchmark Results
| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| IEMOCAP-4 | MMER | Accuracy (%) | 81.7 | — | Unverified |