SOTAVerified

Multimodal Emotion Recognition

This is a leaderboard for multimodal emotion recognition on the IEMOCAP dataset. Modality abbreviations: A = Acoustic, T = Text, V = Visual.

Please include the modality in brackets after the model name.

All models must use the standard five emotion categories and are evaluated with the standard leave-one-session-out (LOSO) protocol. See the papers for references.
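The LOSO protocol above can be sketched in a few lines. This is a minimal illustration, not any paper's actual pipeline: IEMOCAP has five recording sessions, and each fold trains on four sessions and tests on the held-out one. The `utterances` structure and `loso_folds` helper are hypothetical names chosen for this sketch.

```python
# Sketch of a leave-one-session-out (LOSO) split for IEMOCAP-style data.
# Each utterance is a hypothetical (session_id, features, label) tuple.
SESSIONS = [1, 2, 3, 4, 5]

def loso_folds(utterances):
    """Yield (train, test) lists, holding out one session per fold."""
    for held_out in SESSIONS:
        train = [u for u in utterances if u[0] != held_out]
        test = [u for u in utterances if u[0] == held_out]
        yield train, test

# Toy data: two dummy utterances per session.
data = [(s, [0.0], "neutral") for s in SESSIONS for _ in range(2)]
folds = list(loso_folds(data))
for train, test in folds:
    assert len(test) == 2 and len(train) == 8
```

Every utterance appears in exactly one test fold, so the five per-fold scores together cover the whole corpus.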

Papers

Showing 51–75 of 180 papers

| Title | Status | Hype |
| --- | --- | --- |
| GraphCFC: A Directed Graph Based Cross-Modal Feature Complementation Approach for Multimodal Conversational Emotion Recognition | Code | 1 |
| A proposal for Multimodal Emotion Recognition using aural transformers and Action Units on RAVDESS dataset | Code | 1 |
| COGMEN: COntextualized GNN based Multimodal Emotion recognitioN | Code | 1 |
| Group Gated Fusion on Attention-based Bidirectional Alignment for Multimodal Emotion Recognition | Code | 1 |
| Hypercomplex Multimodal Emotion Recognition from EEG and Peripheral Physiological Signals | Code | 1 |
| EmoVerse: Exploring Multimodal Large Language Models for Sentiment and Emotion Understanding | Code | 1 |
| Context-Dependent Domain Adversarial Neural Network for Multimodal Emotion Recognition | | 0 |
| Emotion Recognition with Pre-Trained Transformers Using Multimodal Signals | | 0 |
| Context-aware Cascade Attention-based RNN for Video Emotion Recognition | | 0 |
| A Novel Approach to for Multimodal Emotion Recognition : Multimodal semantic information fusion | | 0 |
| Emotion recognition by fusing time synchronous and time asynchronous representations | | 0 |
| Empathy Through Multimodality in Conversational Interfaces | | 0 |
| Emotion recognition based on multi-modal electrophysiology multi-head attention Contrastive Learning | | 0 |
| An Empirical Study and Improvement for Speech Emotion Recognition | | 0 |
| EmotiCon: Context-Aware Multimodal Emotion Recognition using Frege's Principle | | 0 |
| COLD Fusion: Calibrated and Ordinal Latent Distribution Fusion for Uncertainty-Aware Multimodal Emotion Recognition | | 0 |
| Adversarial Representation with Intra-Modal and Inter-Modal Graph Contrastive Learning for Multimodal Emotion Recognition | | 0 |
| Accommodating Missing Modalities in Time-Continuous Multimodal Emotion Recognition | | 0 |
| EmoTech: A Multi-modal Speech Emotion Recognition Using Multi-source Low-level Information with Hybrid Recurrent Network | | 0 |
| EMOE: Modality-Specific Enhanced Dynamic Emotion Experts | | 0 |
| EMERSK -- Explainable Multimodal Emotion Recognition with Situational Knowledge | | 0 |
| Early Joint Learning of Emotion Information Makes MultiModal Model Understand You Better | | 0 |
| CMATH: Cross-Modality Augmented Transformer with Hierarchical Variational Distillation for Multimodal Emotion Recognition in Conversation | | 0 |
| An Audio-Video Deep and Transfer Learning Framework for Multimodal Emotion Recognition in the wild | | 0 |
| Dynamic Modality and View Selection for Multimodal Emotion Recognition with Missing Modalities | | 0 |
Page 3 of 8

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | GraphSmile | Weighted F1 | 86.52 | | Unverified |
| 2 | Joyful | Weighted F1 | 85.7 | | Unverified |
| 3 | COGMEN | Weighted F1 | 84.5 | | Unverified |
| 4 | DANN | Accuracy | 82.7 | | Unverified |
| 5 | MMER | Accuracy | 81.7 | | Unverified |
| 6 | PATHOSnet v2 | Accuracy | 80.4 | | Unverified |
| 7 | Self-attention weight correction (A+T) | Accuracy | 76.8 | | Unverified |
| 8 | CHFusion | Accuracy | 76.5 | | Unverified |
| 9 | bc-LSTM | Weighted F1 | 74.1 | | Unverified |
| 10 | Audio + Text (Stage III) | F1 | 70.5 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | GraphSmile | Weighted F1 | 66.71 | | Unverified |
| 2 | Audio + Text (Stage III) | Weighted F1 | 65.8 | | Unverified |
| 3 | Joyful | Weighted F1 | 61.77 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | GraphSmile | Weighted F1 | 72.81 | | Unverified |
| 2 | Joyful | Weighted F1 | 70.5 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | GraphSmile | Weighted F1 | 44.93 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | GraphSmile | Weighted F1 | 66.73 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | SMPLify-X | v2v error | 52.9 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | GraphSmile | Weighted F1 | 74.31 | | Unverified |
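Most entries above report Weighted F1, which averages per-class F1 scores weighted by each class's number of true samples. As a rough reference for how that metric is computed, here is a minimal pure-Python sketch (the `weighted_f1` function and the toy labels are illustrative, not taken from any listed paper):

```python
# Minimal weighted-F1: per-class F1 averaged, weighted by true-class support.
from collections import Counter

def weighted_f1(y_true, y_pred):
    support = Counter(y_true)
    total = 0.0
    for c in support:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        total += support[c] * f1
    return total / len(y_true)

# Toy example with two of the five emotion classes.
print(weighted_f1(["hap", "sad", "sad"], ["hap", "sad", "hap"]))
```

Unlike plain accuracy, this weighting keeps minority emotion classes from being drowned out, which is why most leaderboard entries prefer it on the class-imbalanced IEMOCAP labels.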