SOTAVerified

Multimodal Emotion Recognition

This is a leaderboard for multimodal emotion recognition on the IEMOCAP dataset. Modality abbreviations: A = Acoustic, T = Text, V = Visual.

Please include the modalities in brackets after the model name.

All models must use the standard five emotion categories and are evaluated under the standard leave-one-session-out (LOSO) protocol. See the individual papers for details.
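The LOSO protocol mentioned above can be sketched as follows. This is a minimal illustration, not any particular paper's pipeline: IEMOCAP has five recorded sessions, so the model is trained on four sessions, tested on the held-out one, and the per-fold scores are averaged. The function and argument names are hypothetical.

```python
# Hypothetical sketch of leave-one-session-out (LOSO) evaluation on IEMOCAP,
# which consists of five recorded sessions. Train on four sessions, test on
# the held-out one, and average the scores across the five folds.

SESSIONS = [1, 2, 3, 4, 5]

def loso_evaluate(samples, train_fn, score_fn):
    """samples: list of (session_id, features, label) tuples.
    train_fn: callable that fits a model on a list of samples.
    score_fn: callable that scores a fitted model on a list of samples."""
    fold_scores = []
    for held_out in SESSIONS:
        train = [s for s in samples if s[0] != held_out]
        test = [s for s in samples if s[0] == held_out]
        model = train_fn(train)          # fit on the four remaining sessions
        fold_scores.append(score_fn(model, test))
    return sum(fold_scores) / len(fold_scores)  # average over the five folds
```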

Papers

Showing 151–175 of 180 papers

| Title | Status | Hype |
|---|---|---|
| HCAM -- Hierarchical Cross Attention Model for Multi-modal Emotion Recognition | | 0 |
| Hierarchical Audio-Visual Information Fusion with Multi-label Joint Decoding for MER 2023 | | 0 |
| Inconsistency-Aware Cross-Attention for Audio-Visual Fusion in Dimensional Emotion Recognition | | 0 |
| Interpretability for Multimodal Emotion Recognition using Concept Activation Vectors | | 0 |
| Interpretable Multimodal Emotion Recognition using Facial Features and Physiological Signals | | 0 |
| Investigating EEG-Based Functional Connectivity Patterns for Multimodal Emotion Recognition | | 0 |
| Knowledge-aware Bayesian Co-attention for Multimodal Emotion Recognition | | 0 |
| Leveraging Label Information for Multimodal Emotion Recognition | | 0 |
| Leveraging Label Potential for Enhanced Multimodal Emotion Recognition | | 0 |
| LLM supervised Pre-training for Multimodal Emotion Recognition in Conversations | | 0 |
| LMR-CBT: Learning Modality-fused Representations with CB-Transformer for Multimodal Emotion Recognition from Unaligned Multimodal Sequences | | 0 |
| M3ER: Multiplicative Multimodal Emotion Recognition Using Facial, Textual, and Speech Cues | | 0 |
| MART: Masked Affective RepresenTation Learning via Masked Temporal Distribution Distillation | | 0 |
| Masked Graph Learning with Recurrent Alignment for Multimodal Emotion Recognition in Conversation | | 0 |
| MEmoBERT: Pre-training Model with Prompt-based Learning for Multimodal Emotion Recognition | | 0 |
| MicroEmo: Time-Sensitive Multimodal Emotion Recognition with Micro-Expression Dynamics in Video Dialogues | | 0 |
| Modality Influence in Multimodal Machine Learning | | 0 |
| Multilevel Transformer For Multimodal Emotion Recognition | | 0 |
| Multimodal Emotion-Cause Pair Extraction in Conversations | | 0 |
| Multimodal Emotion Recognition among Couples from Lab Settings to Daily Life using Smartwatches | | 0 |
| Multimodal Emotion Recognition and Sentiment Analysis in Multi-Party Conversation Contexts | | 0 |
| Multimodal Emotion Recognition based on Facial Expressions, Speech, and EEG | | 0 |
| Multimodal Emotion Recognition by Fusing Video Semantic in MOOC Learning Scenarios | | 0 |
| Multi-Modal Emotion Recognition by Text, Speech and Video Using Pretrained Transformers | | 0 |
| Multimodal Emotion Recognition for One-Minute-Gradual Emotion Challenge | | 0 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | GraphSmile | Weighted F1 | 86.52 | | Unverified |
| 2 | Joyful | Weighted F1 | 85.7 | | Unverified |
| 3 | COGMEN | Weighted F1 | 84.5 | | Unverified |
| 4 | DANN | Accuracy | 82.7 | | Unverified |
| 5 | MMER | Accuracy | 81.7 | | Unverified |
| 6 | PATHOSnet v2 | Accuracy | 80.4 | | Unverified |
| 7 | Self-attention weight correction (A+T) | Accuracy | 76.8 | | Unverified |
| 8 | CHFusion | Accuracy | 76.5 | | Unverified |
| 9 | bc-LSTM | Weighted F1 | 74.1 | | Unverified |
| 10 | Audio + Text (Stage III) | F1 | 70.5 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | GraphSmile | Weighted F1 | 66.71 | | Unverified |
| 2 | Audio + Text (Stage III) | Weighted F1 | 65.8 | | Unverified |
| 3 | Joyful | Weighted F1 | 61.77 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | GraphSmile | Weighted F1 | 72.81 | | Unverified |
| 2 | Joyful | Weighted F1 | 70.5 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | GraphSmile | Weighted F1 | 44.93 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | GraphSmile | Weighted F1 | 66.73 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | SMPLify-X | v2v error | 52.9 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | GraphSmile | Weighted F1 | 74.31 | | Unverified |
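Most entries above report Weighted F1: the per-class F1 score averaged with weights proportional to each class's support, which matters on IEMOCAP because the emotion categories are imbalanced. A minimal sketch of the metric (equivalent in intent to scikit-learn's `f1_score` with `average="weighted"`; the function name here is illustrative):

```python
# Minimal sketch of the weighted F1 score: compute F1 per class, then
# average with each class weighted by its share of the true labels.
from collections import Counter

def weighted_f1(y_true, y_pred):
    support = Counter(y_true)          # number of true samples per class
    total = len(y_true)
    score = 0.0
    for cls, n in support.items():
        tp = sum(t == cls and p == cls for t, p in zip(y_true, y_pred))
        fp = sum(t != cls and p == cls for t, p in zip(y_true, y_pred))
        fn = sum(t == cls and p != cls for t, p in zip(y_true, y_pred))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        score += (n / total) * f1      # weight by class support
    return score
```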