SOTAVerified

Multimodal Emotion Recognition

This is a leaderboard for multimodal emotion recognition on the IEMOCAP dataset. The modality abbreviations are A: Acoustic, T: Text, V: Visual.

Please include the modalities in brackets after the model name, e.g. "Model (A+T)".

All models must use the standard five emotion categories and are evaluated with the standard leave-one-session-out (LOSO) protocol. See the individual papers for references.
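
For reference, LOSO on IEMOCAP means the dataset's five recorded sessions define five cross-validation folds: each fold trains on four sessions and tests on the held-out fifth, and scores are averaged across folds. Below is a minimal sketch using scikit-learn's LeaveOneGroupOut; the feature, label, and session arrays are hypothetical stand-ins, and the logistic-regression classifier is just a placeholder for any model on this leaderboard.

```python
# Minimal LOSO sketch: IEMOCAP's five sessions act as CV groups, so each
# fold trains on four sessions and tests on the held-out fifth.
# X, y, and sessions are hypothetical stand-ins for real data.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 64))            # utterance-level features
y = rng.integers(0, 5, size=1000)          # five emotion categories
sessions = rng.integers(1, 6, size=1000)   # session IDs 1..5

accs = []
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=sessions):
    clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    accs.append(clf.score(X[test_idx], y[test_idx]))

print(f"LOSO accuracy, averaged over 5 folds: {np.mean(accs):.3f}")
```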

Papers

Showing 51–75 of 180 papers (page 3 of 8)

| Title | Status | Hype |
|-------|--------|------|
| MultiMAE-DER: Multimodal Masked Autoencoder for Dynamic Emotion Recognition | Code | 1 |
| Multimodal Emotion Recognition with Transformer-Based Self Supervised Feature Fusion | Code | 1 |
| COGMEN: COntextualized GNN based Multimodal Emotion recognitioN | Code | 1 |
| Multimodal Emotion Recognition using Audio-Video Transformer Fusion with Cross Attention | Code | 1 |
| Conversation Understanding using Relational Temporal Graph Neural Networks with Auxiliary Cross-Modality Interaction | Code | 1 |
| Tracing Intricate Cues in Dialogue: Joint Graph Structure and Sentiment Dynamics for Multimodal Emotion Recognition | Code | 1 |
| Multi Teacher Privileged Knowledge Distillation for Multimodal Expression Recognition | Code | 0 |
| Multimodal Sentiment Analysis using Hierarchical Fusion with Context Modeling | Code | 0 |
| Combining deep and unsupervised features for multilingual speech emotion recognition | Code | 0 |
| Multimodal Speech Emotion Recognition and Ambiguity Resolution | Code | 0 |
| Multimodal Speech Emotion Recognition Using Audio and Text | Code | 0 |
| Multimodal Emotion Recognition Using Deep Canonical Correlation Analysis | Code | 0 |
| Multimodal Behavioral Markers Exploring Suicidal Intent in Social Media Videos | Code | 0 |
| Multi-Modal Emotion recognition on IEMOCAP Dataset using Deep Learning | Code | 0 |
| Multi-level Fusion of Wav2vec 2.0 and BERT for Multimodal Emotion Recognition | Code | 0 |
| Complementary Fusion of Multi-Features and Multi-Modalities in Sentiment Analysis | Code | 0 |
| Learning Noise-Robust Joint Representation for Multimodal Emotion Recognition under Incomplete Data Scenarios | Code | 0 |
| Learning Alignment for Multimodal Emotion Recognition from Speech | Code | 0 |
| Leveraging Contrastive Learning and Self-Training for Multimodal Emotion Recognition with Limited Labeled Samples | Code | 0 |
| Interpretable Multimodal Emotion Recognition using Hybrid Fusion of Speech and Image Data | Code | 0 |
| Attentive Modality Hopping Mechanism for Speech Emotion Recognition | Code | 0 |
| Investigation of Multimodal Features, Classifiers and Fusion Methods for Emotion Recognition | Code | 0 |
| Feature-Based Dual Visual Feature Extraction Model for Compound Multimodal Emotion Recognition | Code | 0 |
| VISTANet: VIsual Spoken Textual Additive Net for Interpretable Multimodal Emotion Recognition | Code | 0 |
| End-to-End Multimodal Emotion Recognition using Deep Neural Networks | Code | 0 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | GraphSmile | Weighted F1 | 86.52 | | Unverified |
| 2 | Joyful | Weighted F1 | 85.7 | | Unverified |
| 3 | COGMEN | Weighted F1 | 84.5 | | Unverified |
| 4 | DANN | Accuracy | 82.7 | | Unverified |
| 5 | MMER | Accuracy | 81.7 | | Unverified |
| 6 | PATHOSnet v2 | Accuracy | 80.4 | | Unverified |
| 7 | Self-attention weight correction (A+T) | Accuracy | 76.8 | | Unverified |
| 8 | CHFusion | Accuracy | 76.5 | | Unverified |
| 9 | bc-LSTM | Weighted F1 | 74.1 | | Unverified |
| 10 | Audio + Text (Stage III) | F1 | 70.5 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | GraphSmile | Weighted F1 | 66.71 | | Unverified |
| 2 | Audio + Text (Stage III) | Weighted F1 | 65.8 | | Unverified |
| 3 | Joyful | Weighted F1 | 61.77 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | GraphSmile | Weighted F1 | 72.81 | | Unverified |
| 2 | Joyful | Weighted F1 | 70.5 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | GraphSmile | Weighted F1 | 44.93 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | GraphSmile | Weighted F1 | 66.73 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | SMPLify-X | v2v error | 52.9 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | GraphSmile | Weighted F1 | 74.31 | | Unverified |
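
Most entries above report weighted F1: per-class F1 scores averaged with weights proportional to each class's support, so frequent emotion categories count more toward the final score. A minimal sketch with scikit-learn, on hypothetical label arrays:

```python
# Weighted F1 = sum over classes of (class support / total) * class F1.
# y_true and y_pred are hypothetical labels over the five classes 0..4.
from sklearn.metrics import f1_score

y_true = [0, 0, 1, 1, 2, 3, 4, 4, 4]
y_pred = [0, 1, 1, 1, 2, 3, 4, 4, 0]

print(f1_score(y_true, y_pred, average="weighted"))  # ~= 0.778
```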