SOTAVerified

Multimodal Emotion Recognition

This is a leaderboard for multimodal emotion recognition on the IEMOCAP dataset. Modality abbreviations: A = Acoustic, T = Text, V = Visual.

Please include the modalities in brackets after the model name, e.g. "Self-attention weight correction (A+T)".

All models must use the standard five emotion categories and are evaluated with the standard leave-one-session-out (LOSO) protocol. See the individual papers for details.
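The leaderboard does not publish its evaluation script, but the protocol it names (leave-one-session-out splits scored with weighted F1) can be sketched in plain Python. Everything below is an illustrative assumption, not the site's actual code: the `(session_id, features, label)` tuple shape and the `train_and_predict` callback are hypothetical.

```python
# Hedged sketch of the LOSO protocol with a support-weighted F1 metric.
# IEMOCAP has five recorded sessions; each is held out once for testing.
# The sample tuple shape and train_and_predict interface are assumptions.
from collections import Counter

def weighted_f1(y_true, y_pred):
    """Support-weighted mean of per-class F1 (the 'Weighted F1' metric)."""
    support = Counter(y_true)
    total = len(y_true)
    score = 0.0
    for cls, n in support.items():
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == cls and p == cls)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != cls and p == cls)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == cls and p != cls)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        score += (n / total) * f1
    return score

def loso_scores(samples, train_and_predict):
    """Leave-one-session-out: hold out each session in turn.
    samples: list of (session_id, features, label) tuples (assumed shape);
    train_and_predict(train, test) -> predicted labels for the test split."""
    results = {}
    for held_out in sorted({s for s, _, _ in samples}):
        train = [x for x in samples if x[0] != held_out]
        test = [x for x in samples if x[0] == held_out]
        preds = train_and_predict(train, test)
        results[held_out] = weighted_f1([y for _, _, y in test], preds)
    return results
```

Results are typically reported as the average (or pooled score) over the five held-out sessions; scikit-learn users would reach for `f1_score(..., average="weighted")` and `LeaveOneGroupOut` instead of rolling these by hand.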

Papers

Showing 51–100 of 180 papers (page 2 of 4)

| Title | Status | Hype |
|-------|--------|------|
| Revisiting Multimodal Emotion Recognition in Conversation from the Perspective of Graph Spectrum | — | 0 |
| MER 2024: Semi-Supervised Learning, Noise Robustness, and Open-Vocabulary Multimodal Emotion Recognition | Code | 3 |
| Cooperative Sentiment Agents for Multimodal Sentiment Analysis | Code | 1 |
| Dynamic Modality and View Selection for Multimodal Emotion Recognition with Missing Modalities | — | 0 |
| Deep CNN with late fusion for realtime multimodal emotion recognition | — | 0 |
| MMA-DFER: MultiModal Adaptation of unimodal models for Dynamic Facial Expression Recognition in-the-wild | Code | 2 |
| Multimodal Emotion Recognition by Fusing Video Semantic in MOOC Learning Scenarios | — | 0 |
| MIPS at SemEval-2024 Task 3: Multimodal Emotion-Cause Pair Extraction in Conversations with Multimodal Language Models | Code | 1 |
| UniMEEC: Towards Unified Multimodal Emotion Recognition and Emotion Cause | — | 0 |
| Recursive Joint Cross-Modal Attention for Multimodal Fusion in Dimensional Emotion Recognition | Code | 1 |
| Joint Multimodal Transformer for Emotion Recognition in the Wild | Code | 1 |
| Curriculum Learning Meets Directed Acyclic Graph for Multimodal Emotion Recognition | Code | 1 |
| Multi-Modal Emotion Recognition by Text, Speech and Video Using Pretrained Transformers | — | 0 |
| A Two-Stage Multimodal Emotion Recognition Model Based on Graph Contrastive Learning | — | 0 |
| MART: Masked Affective RepresenTation Learning via Masked Temporal Distribution Distillation | — | 0 |
| Adversarial Representation with Intra-Modal and Inter-Modal Graph Contrastive Learning for Multimodal Emotion Recognition | — | 0 |
| Modality-Collaborative Transformer with Hybrid Feature Reconstruction for Robust Emotion Recognition | Code | 0 |
| DER-GCN: Dialogue and Event Relation-Aware Graph Convolutional Neural Network for Multimodal Dialogue Emotion Recognition | — | 0 |
| Deep Imbalanced Learning for Multimodal Emotion Recognition in Conversations | — | 0 |
| GPT-4V with Emotion: A Zero-shot Benchmark for Generalized Emotion Recognition | Code | 1 |
| Towards Emotion Analysis in Short-form Videos: A Large-Scale Dataset and Baseline | Code | 1 |
| Joyful: Joint Modality Fusion and Graph Contrastive Learning for Multimodal Emotion Recognition | Code | 1 |
| Accommodating Missing Modalities in Time-Continuous Multimodal Emotion Recognition | — | 0 |
| Conversation Understanding using Relational Temporal Graph Neural Networks with Auxiliary Cross-Modality Interaction | Code | 1 |
| A Transformer-Based Model With Self-Distillation for Multimodal Emotion Recognition in Conversations | Code | 1 |
| A Contextualized Real-Time Multimodal Emotion Recognition for Conversational Agents using Graph Convolutional Networks in Reinforcement Learning | — | 0 |
| Hypercomplex Multimodal Emotion Recognition from EEG and Peripheral Physiological Signals | Code | 1 |
| Learning Noise-Robust Joint Representation for Multimodal Emotion Recognition under Incomplete Data Scenarios | Code | 0 |
| Hierarchical Audio-Visual Information Fusion with Multi-label Joint Decoding for MER 2023 | — | 0 |
| Leveraging Label Information for Multimodal Emotion Recognition | — | 0 |
| A Unified Transformer-based Network for Multimodal Emotion Recognition | — | 0 |
| Revisiting Disentanglement and Fusion on Modality and Context in Conversational Multimodal Emotion Recognition | — | 0 |
| CFN-ESA: A Cross-Modal Fusion Network with Emotion-Shift Awareness for Dialogue Emotion Recognition | Code | 1 |
| Emotion recognition based on multi-modal electrophysiology multi-head attention Contrastive Learning | — | 0 |
| A Facial Expression-Aware Multimodal Multi-task Learning Framework for Emotion Recognition in Multi-party Conversations | Code | 1 |
| TACOformer: Token-channel compounded Cross Attention for Multimodal Emotion Recognition | — | 0 |
| A Comparison of Time-based Models for Multimodal Emotion Recognition | — | 0 |
| EMERSK -- Explainable Multimodal Emotion Recognition with Situational Knowledge | — | 0 |
| Exploring Attention Mechanisms for Multimodal Emotion Recognition in an Emergency Call Center Corpus | — | 0 |
| Modality Influence in Multimodal Machine Learning | — | 0 |
| Interpretable Multimodal Emotion Recognition using Facial Features and Physiological Signals | — | 0 |
| Versatile audio-visual learning for emotion recognition | — | 0 |
| Noise-Resistant Multimodal Transformer for Emotion Recognition | — | 0 |
| MER 2023: Multi-label Learning, Modality Robustness, and Semi-Supervised Learning | Code | 2 |
| HCAM -- Hierarchical Cross Attention Model for Multi-modal Emotion Recognition | — | 0 |
| An Empirical Study and Improvement for Speech Emotion Recognition | — | 0 |
| Decoupled Multimodal Distilling for Emotion Recognition | Code | 1 |
| Using Auxiliary Tasks In Multimodal Fusion Of Wav2vec 2.0 And BERT For Multimodal Emotion Recognition | — | 0 |
| Knowledge-aware Bayesian Co-attention for Multimodal Emotion Recognition | — | 0 |
| Cross-modal fusion techniques for utterance-level emotion recognition from text and speech | — | 0 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | GraphSmile | Weighted F1 | 86.52 | — | Unverified |
| 2 | Joyful | Weighted F1 | 85.7 | — | Unverified |
| 3 | COGMEN | Weighted F1 | 84.5 | — | Unverified |
| 4 | DANN | Accuracy | 82.7 | — | Unverified |
| 5 | MMER | Accuracy | 81.7 | — | Unverified |
| 6 | PATHOSnet v2 | Accuracy | 80.4 | — | Unverified |
| 7 | Self-attention weight correction (A+T) | Accuracy | 76.8 | — | Unverified |
| 8 | CHFusion | Accuracy | 76.5 | — | Unverified |
| 9 | bc-LSTM | Weighted F1 | 74.1 | — | Unverified |
| 10 | Audio + Text (Stage III) | F1 | 70.5 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | GraphSmile | Weighted F1 | 66.71 | — | Unverified |
| 2 | Audio + Text (Stage III) | Weighted F1 | 65.8 | — | Unverified |
| 3 | Joyful | Weighted F1 | 61.77 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | GraphSmile | Weighted F1 | 72.81 | — | Unverified |
| 2 | Joyful | Weighted F1 | 70.5 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | GraphSmile | Weighted F1 | 44.93 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | GraphSmile | Weighted F1 | 66.73 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | SMPLify-X | v2v error | 52.9 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | GraphSmile | Weighted F1 | 74.31 | — | Unverified |