SOTAVerified

Multimodal Emotion Recognition

This is a leaderboard for multimodal emotion recognition on the IEMOCAP dataset. Modality abbreviations: A = Acoustic, T = Text, V = Visual.

Please include the modalities in brackets after the model name.

All models must use the standard five emotion categories and are evaluated with the standard leave-one-session-out (LOSO) protocol. See the papers for references.
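The LOSO protocol can be sketched in plain Python: IEMOCAP is recorded in five sessions, and each fold trains on four sessions and tests on the held-out fifth. The utterance records, features, and labels below are illustrative placeholders, not real IEMOCAP data.

```python
# Hypothetical utterance records: (session_id, features, label).
# IEMOCAP has five sessions, so LOSO yields five folds.
utterances = [(s, f"feat_{s}_{i}", f"label_{s}_{i}")
              for s in range(1, 6) for i in range(3)]

sessions = sorted({s for s, _, _ in utterances})
folds = []
for held_out in sessions:
    train = [u for u in utterances if u[0] != held_out]
    test = [u for u in utterances if u[0] == held_out]
    # No session overlap between train and test: the evaluation is
    # speaker-independent, since each session has its own speaker pair.
    assert not {u[0] for u in train} & {u[0] for u in test}
    folds.append((held_out, train, test))

print(len(folds))  # 5 folds, one per held-out session
```

Reported LOSO scores are the aggregate over all five folds, which is why single-split results are not directly comparable to the numbers below.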

Papers

Showing 126–150 of 180 papers

| Title | Status | Hype |
|---|---|---|
| Seamless Multimodal Biometrics for Continuous Personalised Wellbeing Monitoring | | 0 |
| Emotion Recognition with Pre-Trained Transformers Using Multimodal Signals | | 0 |
| Multimodal Emotion Recognition among Couples from Lab Settings to Daily Life using Smartwatches | | 0 |
| FAF: A novel multimodal emotion recognition approach integrating face, body and text | | 0 |
| Speech Emotion Recognition Based on Self-Attention Weight Correction for Acoustic and Text Features | | 0 |
| Multilevel Transformer For Multimodal Emotion Recognition | | 0 |
| Interpretable Multimodal Emotion Recognition using Hybrid Fusion of Speech and Image Data | Code | 0 |
| VISTANet: VIsual Spoken Textual Additive Net for Interpretable Multimodal Emotion Recognition | Code | 0 |
| A Multibias-mitigated and Sentiment Knowledge Enriched Transformer for Debiasing in Multimodal Conversational Emotion Recognition | | 0 |
| Multi-level Fusion of Wav2vec 2.0 and BERT for Multimodal Emotion Recognition | Code | 0 |
| 0/1 Deep Neural Networks via Block Coordinate Descent | | 0 |
| COLD Fusion: Calibrated and Ordinal Latent Distribution Fusion for Uncertainty-Aware Multimodal Emotion Recognition | | 0 |
| Do Multimodal Emotion Recognition Models Tackle Ambiguity? | | 0 |
| Bias and Fairness on Multimodal Emotion Detection Algorithms | | 0 |
| Continuous-Time Audiovisual Fusion with Recurrence vs. Attention for In-The-Wild Affect Recognition | | 0 |
| Multimodal Emotion Recognition using Transfer Learning from Speaker Recognition and BERT-based models | | 0 |
| Interpretability for Multimodal Emotion Recognition using Concept Activation Vectors | | 0 |
| LMR-CBT: Learning Modality-fused Representations with CB-Transformer for Multimodal Emotion Recognition from Unaligned Multimodal Sequences | | 0 |
| Multimodal Emotion Recognition on RAVDESS Dataset Using Transfer Learning | | 0 |
| Multimodal End-to-End Group Emotion Recognition using Cross-Modal Attention | | 0 |
| MEmoBERT: Pre-training Model with Prompt-based Learning for Multimodal Emotion Recognition | | 0 |
| Multimodal Emotion-Cause Pair Extraction in Conversations | | 0 |
| Using Large Pre-Trained Models with Cross-Modal Attention for Multi-Modal Emotion Recognition | | 0 |
| Progressive Modality Reinforcement for Human Multimodal Emotion Recognition From Unaligned Multimodal Sequences | | 0 |
| Analyzing the Influence of Dataset Composition for Emotion Recognition | | 0 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | GraphSmile | Weighted F1 | 86.52 | — | Unverified |
| 2 | Joyful | Weighted F1 | 85.7 | — | Unverified |
| 3 | COGMEN | Weighted F1 | 84.5 | — | Unverified |
| 4 | DANN | Accuracy | 82.7 | — | Unverified |
| 5 | MMER | Accuracy | 81.7 | — | Unverified |
| 6 | PATHOSnet v2 | Accuracy | 80.4 | — | Unverified |
| 7 | Self-attention weight correction (A+T) | Accuracy | 76.8 | — | Unverified |
| 8 | CHFusion | Accuracy | 76.5 | — | Unverified |
| 9 | bc-LSTM | Weighted F1 | 74.1 | — | Unverified |
| 10 | Audio + Text (Stage III) | F1 | 70.5 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | GraphSmile | Weighted F1 | 66.71 | — | Unverified |
| 2 | Audio + Text (Stage III) | Weighted F1 | 65.8 | — | Unverified |
| 3 | Joyful | Weighted F1 | 61.77 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | GraphSmile | Weighted F1 | 72.81 | — | Unverified |
| 2 | Joyful | Weighted F1 | 70.5 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | GraphSmile | Weighted F1 | 44.93 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | GraphSmile | Weighted F1 | 66.73 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | SMPLify-X | v2v error | 52.9 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | GraphSmile | Weighted F1 | 74.31 | — | Unverified |
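Most entries above report weighted F1, which averages the per-class F1 scores with weights proportional to each class's number of true samples, so the majority emotion class contributes more than rare ones. A minimal sketch in plain Python; the toy labels over five categories are illustrative, not leaderboard data:

```python
from collections import Counter

# Toy ground truth and predictions over five emotion categories (0..4).
y_true = [0, 0, 1, 1, 2, 2, 3, 3, 4, 4]
y_pred = [0, 1, 1, 1, 2, 0, 3, 3, 4, 2]

def weighted_f1(y_true, y_pred):
    support = Counter(y_true)          # true-sample count per class
    total = len(y_true)
    score = 0.0
    for c in sorted(support):
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        pred_c = sum(p == c for p in y_pred)
        precision = tp / pred_c if pred_c else 0.0
        recall = tp / support[c]
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        score += (support[c] / total) * f1   # support-weighted average
    return score

print(round(weighted_f1(y_true, y_pred), 3))  # 0.693
```

With balanced classes, as in this toy example, weighted F1 coincides with macro F1; the two diverge on imbalanced test sets, which is worth keeping in mind when comparing rows that report plain "F1" against rows that report "Weighted F1".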