SOTAVerified

Multimodal Emotion Recognition

This is a leaderboard for multimodal emotion recognition on the IEMOCAP dataset. The modality abbreviations are A: Acoustic, T: Text, V: Visual.

Please include the modalities in brackets after the model name.

All models must use the standard five emotion categories and are evaluated with the standard leave-one-session-out (LOSO) protocol. See the papers for references.
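For clarity, the evaluation protocol can be sketched in a few lines of Python. This is a hedged, stdlib-only illustration (not code from any listed paper): IEMOCAP has five sessions, so LOSO means five folds, each holding out one whole session, and the leaderboard's main metric is support-weighted F1. The function names `loso_splits` and `weighted_f1` are our own for this sketch.

```python
def loso_splits(session_ids):
    """Yield (train_idx, test_idx) pairs for leave-one-session-out CV.

    IEMOCAP has five sessions; each fold holds out every utterance
    from exactly one session, so no speaker appears in both splits.
    """
    for held_out in sorted(set(session_ids)):
        train = [i for i, s in enumerate(session_ids) if s != held_out]
        test = [i for i, s in enumerate(session_ids) if s == held_out]
        yield train, test


def weighted_f1(y_true, y_pred):
    """Per-class F1 averaged with weights proportional to class support."""
    total = len(y_true)
    score = 0.0
    for c in sorted(set(y_true)):
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        support = sum(1 for t in y_true if t == c)
        score += f1 * support / total
    return score
```

A submission's reported score is then the weighted F1 computed over the pooled (or fold-averaged) predictions from all five LOSO folds.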

Papers

Showing 76–100 of 180 papers

| Title | Status | Hype |
|---|---|---|
| A Contextualized Real-Time Multimodal Emotion Recognition for Conversational Agents using Graph Convolutional Networks in Reinforcement Learning | | 0 |
| Hypercomplex Multimodal Emotion Recognition from EEG and Peripheral Physiological Signals | Code | 1 |
| Learning Noise-Robust Joint Representation for Multimodal Emotion Recognition under Incomplete Data Scenarios | Code | 0 |
| Hierarchical Audio-Visual Information Fusion with Multi-label Joint Decoding for MER 2023 | | 0 |
| Leveraging Label Information for Multimodal Emotion Recognition | | 0 |
| A Unified Transformer-based Network for multimodal Emotion Recognition | | 0 |
| Revisiting Disentanglement and Fusion on Modality and Context in Conversational Multimodal Emotion Recognition | | 0 |
| CFN-ESA: A Cross-Modal Fusion Network with Emotion-Shift Awareness for Dialogue Emotion Recognition | Code | 1 |
| Emotion recognition based on multi-modal electrophysiology multi-head attention Contrastive Learning | | 0 |
| A Facial Expression-Aware Multimodal Multi-task Learning Framework for Emotion Recognition in Multi-party Conversations | Code | 1 |
| TACOformer:Token-channel compounded Cross Attention for Multimodal Emotion Recognition | | 0 |
| A Comparison of Time-based Models for Multimodal Emotion Recognition | | 0 |
| EMERSK -- Explainable Multimodal Emotion Recognition with Situational Knowledge | | 0 |
| Exploring Attention Mechanisms for Multimodal Emotion Recognition in an Emergency Call Center Corpus | | 0 |
| Modality Influence in Multimodal Machine Learning | | 0 |
| Interpretable Multimodal Emotion Recognition using Facial Features and Physiological Signals | | 0 |
| Versatile audio-visual learning for emotion recognition | | 0 |
| Noise-Resistant Multimodal Transformer for Emotion Recognition | | 0 |
| MER 2023: Multi-label Learning, Modality Robustness, and Semi-Supervised Learning | Code | 2 |
| HCAM -- Hierarchical Cross Attention Model for Multi-modal Emotion Recognition | | 0 |
| An Empirical Study and Improvement for Speech Emotion Recognition | | 0 |
| Decoupled Multimodal Distilling for Emotion Recognition | Code | 1 |
| Using Auxiliary Tasks In Multimodal Fusion Of Wav2vec 2.0 And BERT For Multimodal Emotion Recognition | | 0 |
| Knowledge-aware Bayesian Co-attention for Multimodal Emotion Recognition | | 0 |
| cross-modal fusion techniques for utterance-level emotion recognition from text and speech | | 0 |
Page 4 of 8

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | GraphSmile | Weighted F1 | 86.52 | | Unverified |
| 2 | Joyful | Weighted F1 | 85.7 | | Unverified |
| 3 | COGMEN | Weighted F1 | 84.5 | | Unverified |
| 4 | DANN | Accuracy | 82.7 | | Unverified |
| 5 | MMER | Accuracy | 81.7 | | Unverified |
| 6 | PATHOSnet v2 | Accuracy | 80.4 | | Unverified |
| 7 | Self-attention weight correction (A+T) | Accuracy | 76.8 | | Unverified |
| 8 | CHFusion | Accuracy | 76.5 | | Unverified |
| 9 | bc-LSTM | Weighted F1 | 74.1 | | Unverified |
| 10 | Audio + Text (Stage III) | F1 | 70.5 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | GraphSmile | Weighted F1 | 66.71 | | Unverified |
| 2 | Audio + Text (Stage III) | Weighted F1 | 65.8 | | Unverified |
| 3 | Joyful | Weighted F1 | 61.77 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | GraphSmile | Weighted F1 | 72.81 | | Unverified |
| 2 | Joyful | Weighted F1 | 70.5 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | GraphSmile | Weighted F1 | 44.93 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | GraphSmile | Weighted F1 | 66.73 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | SMPLify-X | v2v error | 52.9 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | GraphSmile | Weighted F1 | 74.31 | | Unverified |