SOTAVerified

Multimodal Emotion Recognition

This is a leaderboard for multimodal emotion recognition on the IEMOCAP dataset. Modality abbreviations: A = Acoustic, T = Text, V = Visual.

Please include the modalities in brackets after the model name.

All models must use the standard five emotion categories and be evaluated with the standard leave-one-session-out (LOSO) protocol. See the papers for references.
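IEMOCAP consists of five recorded sessions, so LOSO here means five folds: train on four sessions, test on the held-out fifth, and average across folds. A minimal sketch using scikit-learn's `LeaveOneGroupOut`; the classifier, features, and labels below are synthetic placeholders, not part of any listed model:

```python
# Sketch of leave-one-session-out (LOSO) evaluation on IEMOCAP-style data.
# Session IDs act as the "group": each fold holds out one full session.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
n_utterances = 200
X = rng.normal(size=(n_utterances, 16))           # placeholder utterance features
y = rng.integers(0, 5, size=n_utterances)         # five emotion categories
sessions = rng.integers(1, 6, size=n_utterances)  # session ID (1..5) per utterance

logo = LeaveOneGroupOut()
scores = []
for train_idx, test_idx in logo.split(X, y, groups=sessions):
    clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    pred = clf.predict(X[test_idx])
    scores.append(f1_score(y[test_idx], pred, average="weighted"))

print(f"LOSO folds: {len(scores)}; mean weighted F1: {np.mean(scores):.3f}")
```

With five sessions present in the data, the split yields exactly five folds; leaderboard numbers are the average over those folds.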

Papers

Showing 151–180 of 180 papers

Title | Status | Hype
--- | --- | ---
Combining deep and unsupervised features for multilingual speech emotion recognition | Code | 0
Context-Dependent Domain Adversarial Neural Network for Multimodal Emotion Recognition | — | 0
Emotion recognition by fusing time synchronous and time asynchronous representations | — | 0
An Audio-Video Deep and Transfer Learning Framework for Multimodal Emotion Recognition in the wild | — | 0
Investigating EEG-Based Functional Connectivity Patterns for Multimodal Emotion Recognition | — | 0
EmotiCon: Context-Aware Multimodal Emotion Recognition using Frege's Principle | — | 0
Attentive Modality Hopping Mechanism for Speech Emotion Recognition | Code | 0
Multimodal Affective States Recognition Based on Multiscale CNNs and Biologically Inspired Decision Fusion Model | — | 0
M3ER: Multiplicative Multimodal Emotion Recognition Using Facial, Textual, and Speech Cues | — | 0
Multimodal Behavioral Markers Exploring Suicidal Intent in Social Media Videos | Code | 0
Learning Alignment for Multimodal Emotion Recognition from Speech | Code | 0
Towards Multimodal Emotion Recognition in German Speech Events in Cars using Transfer Learning | — | 0
Multimodal Emotion Recognition Using Deep Canonical Correlation Analysis | Code | 0
Complementary Fusion of Multi-Features and Multi-Modalities in Sentiment Analysis | Code | 0
Multimodal Speech Emotion Recognition and Ambiguity Resolution | Code | 0
Multi-modal Emotion Recognition on IEMOCAP with Neural Networks | — | 0
Multimodal Speech Emotion Recognition Using Audio and Text | Code | 0
ICON: Interactive Conversational Memory Network for Multimodal Emotion Detection | Code | 0
Investigation of Multimodal Features, Classifiers and Fusion Methods for Emotion Recognition | Code | 0
Multimodal Sentiment Analysis using Hierarchical Fusion with Context Modeling | Code | 0
Context-aware Cascade Attention-based RNN for Video Emotion Recognition | — | 0
Convolutional Attention Networks for Multimodal Emotion Recognition from Speech and Text Data | — | 0
Multimodal Emotion Recognition for One-Minute-Gradual Emotion Challenge | — | 0
Framewise approach in multimodal emotion recognition in OMG challenge | — | 0
Contextual Dependencies in Time-Continuous Multidimensional Affect Recognition | — | 0
Multi-Modal Emotion recognition on IEMOCAP Dataset using Deep Learning | Code | 0
Continuous Multimodal Emotion Recognition Approach for AVEC 2017 | — | 0
Context-Dependent Sentiment Analysis in User-Generated Videos | Code | 0
End-to-End Multimodal Emotion Recognition using Deep Neural Networks | Code | 0
Multimodal Emotion Recognition Using Multimodal Deep Learning | — | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
--- | --- | --- | --- | --- | ---
1 | GraphSmile | Weighted F1 | 86.52 | — | Unverified
2 | Joyful | Weighted F1 | 85.7 | — | Unverified
3 | COGMEN | Weighted F1 | 84.5 | — | Unverified
4 | DANN | Accuracy | 82.7 | — | Unverified
5 | MMER | Accuracy | 81.7 | — | Unverified
6 | PATHOSnet v2 | Accuracy | 80.4 | — | Unverified
7 | Self-attention weight correction (A+T) | Accuracy | 76.8 | — | Unverified
8 | CHFusion | Accuracy | 76.5 | — | Unverified
9 | bc-LSTM | Weighted F1 | 74.1 | — | Unverified
10 | Audio + Text (Stage III) | F1 | 70.5 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
--- | --- | --- | --- | --- | ---
1 | GraphSmile | Weighted F1 | 66.71 | — | Unverified
2 | Audio + Text (Stage III) | Weighted F1 | 65.8 | — | Unverified
3 | Joyful | Weighted F1 | 61.77 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
--- | --- | --- | --- | --- | ---
1 | GraphSmile | Weighted F1 | 72.81 | — | Unverified
2 | Joyful | Weighted F1 | 70.5 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
--- | --- | --- | --- | --- | ---
1 | GraphSmile | Weighted F1 | 44.93 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
--- | --- | --- | --- | --- | ---
1 | GraphSmile | Weighted F1 | 66.73 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
--- | --- | --- | --- | --- | ---
1 | SMPLify-X | v2v error | 52.9 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
--- | --- | --- | --- | --- | ---
1 | GraphSmile | Weighted F1 | 74.31 | — | Unverified
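Most entries above report Weighted F1: per-class F1 scores averaged with weights proportional to each class's support (its true-label count), so frequent classes count more than in a macro average. A small illustrative check with scikit-learn; the labels are made-up stand-ins, not IEMOCAP annotations:

```python
# Weighted vs. macro F1 on a toy five-class example.
# Weighted F1 weights each class's F1 by its number of true instances;
# macro F1 gives every class equal weight regardless of frequency.
from sklearn.metrics import f1_score

y_true = ["ang", "ang", "hap", "sad", "neu", "neu", "neu", "exc"]
y_pred = ["ang", "hap", "hap", "sad", "neu", "neu", "sad", "exc"]

print("weighted F1:", f1_score(y_true, y_pred, average="weighted"))
print("macro F1:   ", f1_score(y_true, y_pred, average="macro"))
```

Because "neu" is the most frequent true class here and is predicted relatively well, the weighted average lands slightly away from the macro one; on imbalanced datasets like IEMOCAP the two can diverge substantially, which is why the metric column matters when comparing rows.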