SOTAVerified

Emotion Recognition

Emotion recognition is an important research area for enabling effective human-computer interaction. Human emotions can be detected from speech signals, facial expressions, body language, and electroencephalography (EEG). Source: Using Deep Autoencoders for Facial Expression Recognition

Papers

Showing 251–275 of 2041 papers

| Title | Status | Hype |
|---|---|---|
| Semi-supervised music emotion recognition using noisy student training and harmonic pitch class profiles | Code | 1 |
| Attribute Inference Attack of Speech Emotion Recognition in Federated Learning Settings | Code | 1 |
| SERAB: A multi-lingual benchmark for speech emotion recognition | Code | 1 |
| SER Evals: In-domain and Out-of-domain Benchmarking for Speech Emotion Recognition | Code | 1 |
| Cluster-Level Contrastive Learning for Emotion Recognition in Conversations | Code | 1 |
| CLARA: Multilingual Contrastive Learning for Audio Representation Acquisition | Code | 1 |
| CMCRD: Cross-Modal Contrastive Representation Distillation for Emotion Recognition | Code | 1 |
| CoMPM: Context Modeling with Speaker's Pre-trained Memory Tracking for Emotion Recognition in Conversation | Code | 1 |
| CARAT: Contrastive Feature Reconstruction and Aggregation for Multi-Modal Multi-Label Emotion Recognition | Code | 1 |
| CFN-ESA: A Cross-Modal Fusion Network with Emotion-Shift Awareness for Dialogue Emotion Recognition | Code | 1 |
| Codified audio language modeling learns useful representations for music information retrieval | Code | 1 |
| COGMEN: COntextualized GNN based Multimodal Emotion recognitioN | Code | 1 |
| Automated Parkinson's Disease Detection and Affective Analysis from Emotional EEG Signals | Code | 1 |
| Context Based Emotion Recognition using EMOTIC Dataset | Code | 1 |
| Contextual Information and Commonsense Based Prompt for Emotion Recognition in Conversation | Code | 1 |
| Continuous Emotion Recognition using Visual-audio-linguistic information: A Technical Report for ABAW3 | Code | 1 |
| Tracing Intricate Cues in Dialogue: Joint Graph Structure and Sentiment Dynamics for Multimodal Emotion Recognition | Code | 1 |
| CAGE: Circumplex Affect Guided Expression Inference | Code | 1 |
| Continuous Emotion Recognition with Audio-visual Leader-follower Attentive Fusion | Code | 1 |
| ChatGPT: Jack of all trades, master of none | Code | 1 |
| Audio-Visual Fusion for Emotion Recognition in the Valence-Arousal Space Using Joint Cross-Attention | Code | 1 |
| Decoupled Multimodal Distilling for Emotion Recognition | Code | 1 |
| Deep Multilayer Perceptrons for Dimensional Speech Emotion Recognition | Code | 1 |
| Crowdsourced and Automatic Speech Prominence Estimation | Code | 1 |
Page 11 of 82

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | M2D-CLAP | EmoA | 77.4 | | Unverified |
| 2 | M2D2 | EmoA | 76.7 | | Unverified |
| 3 | M2D | EmoA | 76.1 | | Unverified |
| 4 | Jukebox (Pre-training: CALM) | EmoA | 72.1 | | Unverified |
| 5 | CLMR (Pre-training: contrastive) | EmoA | 67.8 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | LogisticRegression on posteriors of xlsr-Wav2Vec2.0 & bi-LSTM+Attention | Accuracy | 86.7 | | Unverified |
| 2 | MultiMAE-DER | WAR | 83.61 | | Unverified |
| 3 | Intermediate-Attention-Fusion | Accuracy | 81.58 | | Unverified |
| 4 | Logistic Regression on posteriors of the CNN-14 & biLSTM-GuidedST | Accuracy | 80.08 | | Unverified |
| 5 | ERANN-0-4 | Accuracy | 74.8 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | CAGE | Top-3 Accuracy (%) | 14.73 | | Unverified |
| 2 | FocusCLIP | Top-3 Accuracy (%) | 13.73 | | Unverified |
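The Top-3 Accuracy metric above counts a prediction as correct when the true label is among a model's three highest-scoring classes. A minimal sketch of the standard computation (the scores and labels below are illustrative toy data, not leaderboard results):

```python
import numpy as np

def top_k_accuracy(scores, labels, k=3):
    """Fraction of samples whose true label is among the k highest-scoring classes."""
    # argsort is ascending, so the last k columns hold the k largest scores per row
    top_k = np.argsort(scores, axis=1)[:, -k:]
    hits = [label in row for label, row in zip(labels, top_k)]
    return float(np.mean(hits))

# Toy example: 3 samples, 5 emotion classes
scores = np.array([[0.1, 0.5, 0.2, 0.1, 0.1],
                   [0.3, 0.1, 0.1, 0.4, 0.1],
                   [0.2, 0.2, 0.2, 0.2, 0.2]])
labels = np.array([2, 0, 4])
```

With these toy inputs, every true label falls inside the top three scores, so `top_k_accuracy(scores, labels, k=3)` is 1.0, while the stricter `k=1` only credits the last sample.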
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | VGG based | 5-class test accuracy | 66.13 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | MaSaC-ERC-Z | F1-score (Weighted) | 51.17 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | BiHDM | Accuracy | 40.34 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | w2v2-L-robust-12 | Concordance correlation coefficient (CCC) | 0.64 | | Unverified |
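The concordance correlation coefficient (CCC) used above is the standard metric for continuous emotion ratings such as valence and arousal: it rewards predictions that not only correlate with the gold ratings but also match their mean and variance. A minimal NumPy sketch of Lin's definition (variable names are illustrative):

```python
import numpy as np

def ccc(pred, gold):
    """Lin's concordance correlation coefficient between two 1-D arrays."""
    pred = np.asarray(pred, dtype=float)
    gold = np.asarray(gold, dtype=float)
    mp, mg = pred.mean(), gold.mean()
    vp, vg = pred.var(), gold.var()           # population variances
    cov = ((pred - mp) * (gold - mg)).mean()  # population covariance
    # 2*cov / (var_p + var_g + mean-shift penalty)
    return 2 * cov / (vp + vg + (mp - mg) ** 2)
```

Perfect agreement gives 1, perfect anti-correlation gives -1, and any systematic bias in mean or scale pulls the score toward 0 even when Pearson correlation is high.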
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | 4D-aNN | Accuracy | 96.1 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | CNN | 1'" | 1 | | Unverified |