SOTAVerified

Multimodal Emotion Recognition

This is a leaderboard for multimodal emotion recognition on the IEMOCAP dataset. The modality abbreviations are A: acoustic, T: text, V: visual.

Please include the modalities in brackets after the model name, e.g., "(A+T)".

All models must use the standard five emotion categories and are evaluated with the standard leave-one-session-out (LOSO) protocol. See the papers for references.
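For concreteness, the evaluation protocol above can be sketched as follows. This is a minimal illustration, not any paper's actual pipeline: IEMOCAP has five sessions, LOSO holds out one session per fold and trains on the other four, and the fold score here is weighted F1 (per-class F1 averaged by class support). The `train_eval` callback and the utterance dict layout are hypothetical placeholders for a real model.

```python
def weighted_f1(y_true, y_pred, labels):
    """Per-class F1 averaged with weights proportional to class support."""
    total = len(y_true)
    score = 0.0
    for c in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        support = tp + fn
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        score += f1 * support / total
    return score

def loso_scores(utterances, train_eval):
    """Leave-one-session-out: hold out each session in turn, train on the rest.

    `utterances` is a list of dicts with "session" and "label" keys (a
    hypothetical layout); `train_eval(train, test)` is a placeholder that
    fits a model and returns (y_true, y_pred) for the held-out session.
    """
    labels = sorted({u["label"] for u in utterances})
    sessions = sorted({u["session"] for u in utterances})
    scores = []
    for held_out in sessions:
        train = [u for u in utterances if u["session"] != held_out]
        test = [u for u in utterances if u["session"] == held_out]
        y_true, y_pred = train_eval(train, test)
        scores.append(weighted_f1(y_true, y_pred, labels))
    return scores
```

Leaderboard numbers are typically the mean of the per-fold scores, so a model is never evaluated on a session it has seen during training.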

Papers

Showing 1–10 of 180 papers

Title | Status | Hype
A Robust Incomplete Multimodal Low-Rank Adaptation Approach for Emotion Recognition | - | 0
GSDNet: Revisiting Incomplete Multimodal-Diffusion from Graph Spectrum Perspective for Conversation Emotion Recognition | - | 0
Towards Robust Multimodal Emotion Recognition under Missing Modalities and Distribution Shifts | Code | 1
Multimodal Mixture of Low-Rank Experts for Sentiment Analysis and Emotion Recognition | - | 0
TACFN: Transformer-based Adaptive Cross-modal Fusion Network for Multimodal Emotion Recognition | Code | 0
PsyCounAssist: A Full-Cycle AI-Powered Psychological Counseling Assistant System | - | 0
Leveraging Label Potential for Enhanced Multimodal Emotion Recognition | - | 0
BeMERC: Behavior-Aware MLLM-based Framework for Multimodal Emotion Recognition in Conversation | - | 0
Unimodal-driven Distillation in Multimodal Emotion Recognition with Dynamic Fusion | - | 0
GatedxLSTM: A Multimodal Affective Computing Approach for Emotion Recognition in Conversations | - | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | GraphSmile | Weighted F1 | 86.52 | - | Unverified
2 | Joyful | Weighted F1 | 85.7 | - | Unverified
3 | COGMEN | Weighted F1 | 84.5 | - | Unverified
4 | DANN | Accuracy | 82.7 | - | Unverified
5 | MMER | Accuracy | 81.7 | - | Unverified
6 | PATHOSnet v2 | Accuracy | 80.4 | - | Unverified
7 | Self-attention weight correction (A+T) | Accuracy | 76.8 | - | Unverified
8 | CHFusion | Accuracy | 76.5 | - | Unverified
9 | bc-LSTM | Weighted F1 | 74.1 | - | Unverified
10 | Audio + Text (Stage III) | F1 | 70.5 | - | Unverified