SOTAVerified

Emotion Recognition

Emotion Recognition is an important research area for enabling effective human-computer interaction. Human emotions can be detected from speech signals, facial expressions, body language, and electroencephalography (EEG). Source: Using Deep Autoencoders for Facial Expression Recognition
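As a toy illustration of the speech-signal route (not drawn from any paper on this page — the listed systems use deep models), the sketch below classifies "high-arousal" vs "low-arousal" audio from two classic low-level features, frame energy and zero-crossing rate. The thresholds and labels are made-up assumptions for demonstration only.

```python
import math

def features(signal):
    """Return (mean energy, zero-crossing rate) for a list of samples."""
    energy = sum(s * s for s in signal) / len(signal)
    crossings = sum(1 for a, b in zip(signal, signal[1:]) if (a < 0) != (b < 0))
    zcr = crossings / (len(signal) - 1)
    return energy, zcr

def classify(signal, energy_thresh=0.1, zcr_thresh=0.05):
    """Toy rule (hypothetical thresholds): loud, rapidly varying
    speech is treated as high-arousal."""
    energy, zcr = features(signal)
    if energy > energy_thresh and zcr > zcr_thresh:
        return "high-arousal"
    return "low-arousal"

# Synthetic stand-ins for real audio at an 8 kHz sample rate:
# a loud 440 Hz tone vs a quiet 40 Hz hum.
rate = 8000
loud = [0.8 * math.sin(2 * math.pi * 440 * t / rate) for t in range(rate)]
quiet = [0.05 * math.sin(2 * math.pi * 40 * t / rate) for t in range(rate)]
print(classify(loud))   # high-arousal
print(classify(quiet))  # low-arousal
```

Real emotion recognizers learn such decision boundaries from labeled corpora rather than fixed thresholds, but the feature-then-classify structure is the same.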

Papers

Showing 176–200 of 2041 papers

| Title | Status | Hype |
|---|---|---|
| Affective Behaviour Analysis Using Pretrained Model with Facial Priori | Code | 1 |
| Multimodal Emotion Recognition with Modality-Pairwise Unsupervised Contrastive Loss | Code | 1 |
| Self-supervised Group Meiosis Contrastive Learning for EEG-Based Emotion Recognition | Code | 1 |
| GraphCFC: A Directed Graph Based Cross-Modal Feature Complementation Approach for Multimodal Conversational Emotion Recognition | Code | 1 |
| CoMPM: Context Modeling with Speaker’s Pre-trained Memory Tracking for Emotion Recognition in Conversation | Code | 1 |
| The MuSe 2022 Multimodal Sentiment Analysis Challenge: Humor, Emotional Reactions, and Stress | Code | 1 |
| The Emotion is Not One-hot Encoding: Learning with Grayscale Label for Emotion Recognition in Conversation | Code | 1 |
| A Multimodal Corpus for Emotion Recognition in Sarcasm | Code | 1 |
| A Japanese Dataset for Subjective and Objective Sentiment Polarity Classification in Micro Blog Domain | Code | 1 |
| M3ED: Multi-modal Multi-scene Multi-label Emotional Dialogue Database | Code | 1 |
| EmotionFlow: Capture the Dialogue Level Emotion Transitions | Code | 1 |
| COGMEN: COntextualized GNN based Multimodal Emotion recognitioN | Code | 1 |
| Speech Emotion Recognition with Global-Aware Fusion on Multi-scale Feature Representation | Code | 1 |
| GMSS: Graph-Based Multi-Task Self-Supervised Learning for EEG Emotion Recognition | Code | 1 |
| Engagement Detection with Multi-Task Training in E-Learning Environments | Code | 1 |
| MMER: Multimodal Multi-task Learning for Speech Emotion Recognition | Code | 1 |
| Speech Emotion Recognition with Co-Attention based Multi-level Acoustic Information | Code | 1 |
| A Joint Cross-Attention Model for Audio-Visual Fusion in Dimensional Emotion Recognition | Code | 1 |
| Continuous Emotion Recognition using Visual-audio-linguistic information: A Technical Report for ABAW3 | Code | 1 |
| Semi-FedSER: Semi-supervised Learning for Speech Emotion Recognition On Federated Learning using Multiview Pseudo-Labeling | Code | 1 |
| MM-DFN: Multimodal Dynamic Fusion Network for Emotion Recognition in Conversations | Code | 1 |
| Automated Parkinson's Disease Detection and Affective Analysis from Emotional EEG Signals | Code | 1 |
| Predicting emotion from music videos: exploring the relative contribution of visual and auditory information to affective responses | Code | 1 |
| Is Cross-Attention Preferable to Self-Attention for Multi-Modal Emotion Recognition? | Code | 1 |
| PARSE: Pairwise Alignment of Representations in Semi-Supervised EEG Learning for Emotion Recognition | Code | 1 |
Page 8 of 82

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | M2D-CLAP | EmoA | 77.4 | | Unverified |
| 2 | M2D2 | EmoA | 76.7 | | Unverified |
| 3 | M2D | EmoA | 76.1 | | Unverified |
| 4 | Jukebox (Pre-training: CALM) | EmoA | 72.1 | | Unverified |
| 5 | CLMR (Pre-training: contrastive) | EmoA | 67.8 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Logistic Regression on posteriors of xlsr-Wav2Vec2.0 & bi-LSTM+Attention | Accuracy | 86.7 | | Unverified |
| 2 | MultiMAE-DER | WAR | 83.61 | | Unverified |
| 3 | Intermediate-Attention-Fusion | Accuracy | 81.58 | | Unverified |
| 4 | Logistic Regression on posteriors of the CNN-14 & biLSTM-GuidedST | Accuracy | 80.08 | | Unverified |
| 5 | ERANN-0-4 | Accuracy | 74.8 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | CAGE | Top-3 Accuracy (%) | 14.73 | | Unverified |
| 2 | FocusCLIP | Top-3 Accuracy (%) | 13.73 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | VGG based | 5-class test accuracy | 66.13 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | MaSaC-ERC-Z | F1-score (Weighted) | 51.17 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | BiHDM | Accuracy | 40.34 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | w2v2-L-robust-12 | Concordance correlation coefficient (CCC) | 0.64 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | 4D-aNN | Accuracy | 96.1 | | Unverified |