SOTAVerified

Multimodal Emotion Recognition

This is a leaderboard for multimodal emotion recognition on the IEMOCAP dataset. The modality abbreviations are A: Acoustic, T: Text, V: Visual.

Please include the modalities in brackets after the model name, e.g., "HCAM (A+T)" for a model that uses acoustic and text inputs.

All models must use the standard five emotion categories and are evaluated under the standard leave-one-session-out (LOSO) protocol. See the papers for references. As a concrete illustration, a minimal LOSO loop is sketched below.
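In LOSO evaluation, each of IEMOCAP's five recorded sessions is held out once as the test set while the remaining four are used for training, and the per-fold scores are averaged. The sketch below shows only the fold structure; `load_sessions` and `train_model` are hypothetical placeholders for your own data pipeline and model, not code from any listed paper.

```python
# Minimal sketch of leave-one-session-out (LOSO) evaluation on IEMOCAP.
# `load_sessions` and `train_model` are hypothetical placeholders.
from sklearn.metrics import f1_score

SESSIONS = ["Ses01", "Ses02", "Ses03", "Ses04", "Ses05"]  # IEMOCAP's 5 sessions

def loso_weighted_f1(load_sessions, train_model):
    fold_scores = []
    for held_out in SESSIONS:
        train_ids = [s for s in SESSIONS if s != held_out]
        X_train, y_train = load_sessions(train_ids)   # train on 4 sessions
        X_test, y_test = load_sessions([held_out])    # test on the held-out one
        model = train_model(X_train, y_train)
        y_pred = model.predict(X_test)
        fold_scores.append(f1_score(y_test, y_pred, average="weighted"))
    # LOSO results are conventionally reported as the mean over the 5 folds.
    return sum(fold_scores) / len(fold_scores)
```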

Papers

Showing 101–150 of 180 papers

Title | Status | Hype
Modality-Collaborative Transformer with Hybrid Feature Reconstruction for Robust Emotion Recognition | Code | 0
DER-GCN: Dialogue and Event Relation-Aware Graph Convolutional Neural Network for Multimodal Dialogue Emotion Recognition | | 0
Deep Imbalanced Learning for Multimodal Emotion Recognition in Conversations | | 0
Accommodating Missing Modalities in Time-Continuous Multimodal Emotion Recognition | | 0
A Contextualized Real-Time Multimodal Emotion Recognition for Conversational Agents using Graph Convolutional Networks in Reinforcement Learning | | 0
Learning Noise-Robust Joint Representation for Multimodal Emotion Recognition under Incomplete Data Scenarios | Code | 0
Hierarchical Audio-Visual Information Fusion with Multi-label Joint Decoding for MER 2023 | | 0
Leveraging Label Information for Multimodal Emotion Recognition | | 0
A Unified Transformer-based Network for multimodal Emotion Recognition | | 0
Revisiting Disentanglement and Fusion on Modality and Context in Conversational Multimodal Emotion Recognition | | 0
Emotion recognition based on multi-modal electrophysiology multi-head attention Contrastive Learning | | 0
TACOformer: Token-channel compounded Cross Attention for Multimodal Emotion Recognition | | 0
A Comparison of Time-based Models for Multimodal Emotion Recognition | | 0
EMERSK -- Explainable Multimodal Emotion Recognition with Situational Knowledge | | 0
Exploring Attention Mechanisms for Multimodal Emotion Recognition in an Emergency Call Center Corpus | | 0
Modality Influence in Multimodal Machine Learning | | 0
Interpretable Multimodal Emotion Recognition using Facial Features and Physiological Signals | | 0
Versatile audio-visual learning for emotion recognition | | 0
Noise-Resistant Multimodal Transformer for Emotion Recognition | | 0
HCAM -- Hierarchical Cross Attention Model for Multi-modal Emotion Recognition | | 0
An Empirical Study and Improvement for Speech Emotion Recognition | | 0
Using Auxiliary Tasks In Multimodal Fusion Of Wav2vec 2.0 And BERT For Multimodal Emotion Recognition | | 0
Knowledge-aware Bayesian Co-attention for Multimodal Emotion Recognition | | 0
cross-modal fusion techniques for utterance-level emotion recognition from text and speech | | 0
CSAT-FTCN: A Fuzzy-Oriented Model with Contextual Self-attention Network for Multimodal Emotion Recognition | | 0
Seamless Multimodal Biometrics for Continuous Personalised Wellbeing Monitoring | | 0
Emotion Recognition with Pre-Trained Transformers Using Multimodal Signals | | 0
Multimodal Emotion Recognition among Couples from Lab Settings to Daily Life using Smartwatches | | 0
FAF: A novel multimodal emotion recognition approach integrating face, body and text | | 0
Speech Emotion Recognition Based on Self-Attention Weight Correction for Acoustic and Text Features | | 0
Multilevel Transformer For Multimodal Emotion Recognition | | 0
Interpretable Multimodal Emotion Recognition using Hybrid Fusion of Speech and Image Data | Code | 0
VISTANet: VIsual Spoken Textual Additive Net for Interpretable Multimodal Emotion Recognition | Code | 0
A Multibias-mitigated and Sentiment Knowledge Enriched Transformer for Debiasing in Multimodal Conversational Emotion Recognition | | 0
Multi-level Fusion of Wav2vec 2.0 and BERT for Multimodal Emotion Recognition | Code | 0
0/1 Deep Neural Networks via Block Coordinate Descent | | 0
COLD Fusion: Calibrated and Ordinal Latent Distribution Fusion for Uncertainty-Aware Multimodal Emotion Recognition | | 0
Do Multimodal Emotion Recognition Models Tackle Ambiguity? | | 0
Bias and Fairness on Multimodal Emotion Detection Algorithms | | 0
Continuous-Time Audiovisual Fusion with Recurrence vs. Attention for In-The-Wild Affect Recognition | | 0
Multimodal Emotion Recognition using Transfer Learning from Speaker Recognition and BERT-based models | | 0
Interpretability for Multimodal Emotion Recognition using Concept Activation Vectors | | 0
LMR-CBT: Learning Modality-fused Representations with CB-Transformer for Multimodal Emotion Recognition from Unaligned Multimodal Sequences | | 0
Multimodal Emotion Recognition on RAVDESS Dataset Using Transfer Learning | | 0
Multimodal End-to-End Group Emotion Recognition using Cross-Modal Attention | | 0
MEmoBERT: Pre-training Model with Prompt-based Learning for Multimodal Emotion Recognition | | 0
Multimodal Emotion-Cause Pair Extraction in Conversations | | 0
Using Large Pre-Trained Models with Cross-Modal Attention for Multi-Modal Emotion Recognition | | 0
Progressive Modality Reinforcement for Human Multimodal Emotion Recognition From Unaligned Multimodal Sequences | | 0
Analyzing the Influence of Dataset Composition for Emotion Recognition | | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | GraphSmile | Weighted F1 | 86.52 | | Unverified
2 | Joyful | Weighted F1 | 85.7 | | Unverified
3 | COGMEN | Weighted F1 | 84.5 | | Unverified
4 | DANN | Accuracy | 82.7 | | Unverified
5 | MMER | Accuracy | 81.7 | | Unverified
6 | PATHOSnet v2 | Accuracy | 80.4 | | Unverified
7 | Self-attention weight correction (A+T) | Accuracy | 76.8 | | Unverified
8 | CHFusion | Accuracy | 76.5 | | Unverified
9 | bc-LSTM | Weighted F1 | 74.1 | | Unverified
10 | Audio + Text (Stage III) | F1 | 70.5 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | GraphSmile | Weighted F1 | 66.71 | | Unverified
2 | Audio + Text (Stage III) | Weighted F1 | 65.8 | | Unverified
3 | Joyful | Weighted F1 | 61.77 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | GraphSmile | Weighted F1 | 72.81 | | Unverified
2 | Joyful | Weighted F1 | 70.5 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | GraphSmile | Weighted F1 | 44.93 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | GraphSmile | Weighted F1 | 66.73 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SMPLify-X | v2v error | 52.9 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | GraphSmile | Weighted F1 | 74.31 | | Unverified
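The tables above mix Weighted F1 (per-class F1 averaged with class-frequency weights), plain F1, and Accuracy. Because the five IEMOCAP emotion categories are imbalanced, Accuracy and Weighted F1 can diverge sharply, so scores under different metrics are not directly comparable. The toy sketch below illustrates the gap with scikit-learn; all labels and numbers are illustrative, not taken from the leaderboard.

```python
# Toy comparison of Accuracy vs. Weighted F1 on imbalanced labels.
# Labels and counts are illustrative only, not leaderboard data.
from sklearn.metrics import accuracy_score, f1_score

y_true = ["neu"] * 6 + ["ang", "hap", "sad", "exc"]  # majority-heavy ground truth
y_pred = ["neu"] * 10                                # model that always says "neutral"

print(accuracy_score(y_true, y_pred))                                  # 0.6
print(f1_score(y_true, y_pred, average="weighted", zero_division=0))   # 0.45
```

A model that collapses to the majority class keeps a passable accuracy (0.6 here) while its weighted F1 drops to 0.45, since the minority classes contribute zero F1; this is one reason Weighted F1 is the more common metric in the tables above.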