SOTAVerified

Multimodal Emotion Recognition

This is a leaderboard for multimodal emotion recognition on the IEMOCAP dataset. The modality abbreviations are A: Acoustic, T: Text, V: Visual.

Please include the modalities in brackets after the model name.

All models must use the standard five emotion categories and are evaluated with the standard leave-one-session-out (LOSO) protocol. See the papers for references.
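As a minimal sketch of what the LOSO protocol means in practice: IEMOCAP's five recorded sessions act as cross-validation groups, a model is trained on four sessions and tested on the held-out fifth, and the weighted F1 reported on leaderboards like this one is averaged over the five folds. The features, labels, and classifier below are placeholders, not any model from the tables.

```python
# Illustrative leave-one-session-out (LOSO) evaluation with weighted F1.
# All data and the classifier are placeholders for a real IEMOCAP pipeline.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))           # placeholder features (e.g. fused A+T+V embeddings)
y = rng.integers(0, 5, size=100)        # five emotion classes
sessions = np.repeat(np.arange(5), 20)  # IEMOCAP's five sessions as CV groups

scores = []
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=sessions):
    clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    pred = clf.predict(X[test_idx])
    scores.append(f1_score(y[test_idx], pred, average="weighted"))

print(f"LOSO weighted F1 over {len(scores)} folds: {np.mean(scores):.3f}")
```

Each fold leaves out one entire session, so no speaker from the test session appears in training, which is the point of the protocol.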

Papers

Showing 151–180 of 180 papers

Title | Status | Hype
Multimodal Emotion Recognition and Sentiment Analysis in Multi-Party Conversation Contexts | — | 0
Multimodal Emotion Recognition based on Facial Expressions, Speech, and EEG | — | 0
Multimodal Emotion Recognition by Fusing Video Semantic in MOOC Learning Scenarios | — | 0
Multi-Modal Emotion Recognition by Text, Speech and Video Using Pretrained Transformers | — | 0
Multimodal Emotion Recognition for One-Minute-Gradual Emotion Challenge | — | 0
Multimodal Affective States Recognition Based on Multiscale CNNs and Biologically Inspired Decision Fusion Model | — | 0
Multimodal Behavioral Markers Exploring Suicidal Intent in Social Media Videos | Code | 0
Multimodal Speech Emotion Recognition and Ambiguity Resolution | Code | 0
Multimodal Speech Emotion Recognition Using Audio and Text | Code | 0
Multi Teacher Privileged Knowledge Distillation for Multimodal Expression Recognition | Code | 0
Investigation of Multimodal Features, Classifiers and Fusion Methods for Emotion Recognition | Code | 0
Context-Dependent Sentiment Analysis in User-Generated Videos | Code | 0
TACFN: Transformer-based Adaptive Cross-modal Fusion Network for Multimodal Emotion Recognition | Code | 0
Interpretable Multimodal Emotion Recognition using Hybrid Fusion of Speech and Image Data | Code | 0
ICON: Interactive Conversational Memory Network for Multimodal Emotion Detection | Code | 0
Multi-Modal Emotion recognition on IEMOCAP Dataset using Deep Learning | Code | 0
Combining deep and unsupervised features for multilingual speech emotion recognition | Code | 0
Attentive Modality Hopping Mechanism for Speech Emotion Recognition | Code | 0
VISTANet: VIsual Spoken Textual Additive Net for Interpretable Multimodal Emotion Recognition | Code | 0
Multi-level Fusion of Wav2vec 2.0 and BERT for Multimodal Emotion Recognition | Code | 0
Multimodal Emotion Recognition Using Deep Canonical Correlation Analysis | Code | 0
Feature-Based Dual Visual Feature Extraction Model for Compound Multimodal Emotion Recognition | Code | 0
Modality-Collaborative Transformer with Hybrid Feature Reconstruction for Robust Emotion Recognition | Code | 0
Leveraging Contrastive Learning and Self-Training for Multimodal Emotion Recognition with Limited Labeled Samples | Code | 0
Learning Noise-Robust Joint Representation for Multimodal Emotion Recognition under Incomplete Data Scenarios | Code | 0
End-to-End Multimodal Emotion Recognition using Deep Neural Networks | Code | 0
Textualized and Feature-based Models for Compound Multimodal Emotion Recognition in the Wild | Code | 0
Learning Alignment for Multimodal Emotion Recognition from Speech | Code | 0
Complementary Fusion of Multi-Features and Multi-Modalities in Sentiment Analysis | Code | 0
Multimodal Sentiment Analysis using Hierarchical Fusion with Context Modeling | Code | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | GraphSmile | Weighted F1 | 86.52 | — | Unverified
2 | Joyful | Weighted F1 | 85.7 | — | Unverified
3 | COGMEN | Weighted F1 | 84.5 | — | Unverified
4 | DANN | Accuracy | 82.7 | — | Unverified
5 | MMER | Accuracy | 81.7 | — | Unverified
6 | PATHOSnet v2 | Accuracy | 80.4 | — | Unverified
7 | Self-attention weight correction (A+T) | Accuracy | 76.8 | — | Unverified
8 | CHFusion | Accuracy | 76.5 | — | Unverified
9 | bc-LSTM | Weighted F1 | 74.1 | — | Unverified
10 | Audio + Text (Stage III) | F1 | 70.5 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | GraphSmile | Weighted F1 | 66.71 | — | Unverified
2 | Audio + Text (Stage III) | Weighted F1 | 65.8 | — | Unverified
3 | Joyful | Weighted F1 | 61.77 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | GraphSmile | Weighted F1 | 72.81 | — | Unverified
2 | Joyful | Weighted F1 | 70.5 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | GraphSmile | Weighted F1 | 44.93 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | GraphSmile | Weighted F1 | 66.73 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SMPLify-X | v2v error | 52.9 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | GraphSmile | Weighted F1 | 74.31 | — | Unverified