SOTAVerified

Emotion Recognition

Emotion recognition is an important research area for enabling effective human-computer interaction. Human emotions can be detected from speech signals, facial expressions, body language, and electroencephalography (EEG). Source: Using Deep Autoencoders for Facial Expression Recognition
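The speech-based route above typically starts by extracting acoustic features (e.g. frame-level energy or pitch) from the signal before any classification. As a purely illustrative, stdlib-only sketch of one such feature — per-frame RMS energy on toy signals, not code from any listed system:

```python
import math

def frame_energies(signal, frame_len=4):
    """Split a 1-D signal into non-overlapping frames and compute per-frame RMS energy."""
    return [
        math.sqrt(sum(s * s for s in signal[i:i + frame_len]) / frame_len)
        for i in range(0, len(signal) - frame_len + 1, frame_len)
    ]

# Toy signals: a low-amplitude "calm" tone vs. a high-amplitude "aroused" one.
calm = [0.1 * math.sin(0.5 * t) for t in range(16)]
aroused = [0.9 * math.sin(0.5 * t) for t in range(16)]

print(max(frame_energies(aroused)) > max(frame_energies(calm)))  # True
```

Real systems use richer features (MFCCs, pitch contours, or learned embeddings such as Wav2Vec 2.0 representations, which appear in the benchmark tables below), but the framing step is the same idea.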

Papers

Showing 401–425 of 2041 papers

Title | Status | Hype
INTERSPEECH 2009 Emotion Challenge Revisited: Benchmarking 15 Years of Progress in Speech Emotion Recognition | Code | 0
Enrolment-based personalisation for improving individual-level fairness in speech emotion recognition | Code | 0
Emo-bias: A Large Scale Evaluation of Social Bias on Speech Emotion Recognition | — | 0
Think out Loud: Emotion Deducing Explanation in Dialogues | — | 0
Are Large Language Models More Empathetic than Humans? | — | 0
BLSP-Emo: Towards Empathetic Large Speech-Language Models | Code | 2
Evaluation of data inconsistency for multi-modal sentiment analysis | — | 0
Multi-Microphone Speech Emotion Recognition using the Hierarchical Token-semantic Audio Transformer Architecture | — | 0
E-ICL: Enhancing Fine-Grained Emotion Recognition through the Lens of Prototype Theory | — | 0
Combining Qualitative and Computational Approaches for Literary Analysis of Finnish Novels | — | 0
Unveiling Hidden Factors: Explainable AI for Feature Boosting in Speech Emotion Recognition | Code | 0
1st Place Solution to Odyssey Emotion Recognition Challenge Task1: Tackling Class Imbalance Problem | — | 0
Iterative Feature Boosting for Explainable Speech Emotion Recognition | Code | 0
Large Brain Model for Learning Generic Representations with Tremendous EEG Data in BCI | Code | 4
Exploring Thermography Technology: A Comprehensive Facial Dataset for Face Detection, Recognition, and Emotion | — | 0
Multi-modal Mood Reader: Pre-trained Model Empowers Cross-Subject Emotion Recognition | — | 0
Enhancing Emotion Recognition in Conversation through Emotional Cross-Modal Fusion and Inter-class Contrastive Learning | — | 0
Beware of Overestimated Decoding Performance Arising from Temporal Autocorrelations in Electroencephalogram Signals | — | 0
Detail-Enhanced Intra- and Inter-modal Interaction for Audio-Visual Emotion Recognition | — | 0
Crossmodal ASR Error Correction with Discrete Speech Units | Code | 0
ComFace: Facial Representation Learning with Synthetic Data for Comparing Faces | — | 0
ST-Gait++: Leveraging spatio-temporal convolutions for gait-based emotion recognition on videos | — | 0
Inconsistency-Aware Cross-Attention for Audio-Visual Fusion in Dimensional Emotion Recognition | — | 0
SemEval-2024 Task 3: Multimodal Emotion Cause Analysis in Conversations | Code | 1
Transformer based neural networks for emotion recognition in conversations | Code | 0
Page 17 of 82

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | M2D-CLAP | EmoA | 77.4 | — | Unverified
2 | M2D2 | EmoA | 76.7 | — | Unverified
3 | M2D | EmoA | 76.1 | — | Unverified
4 | Jukebox (Pre-training: CALM) | EmoA | 72.1 | — | Unverified
5 | CLMR (Pre-training: contrastive) | EmoA | 67.8 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | LogisticRegression on posteriors of xlsr-Wav2Vec2.0&bi-LSTM+Attention | Accuracy | 86.7 | — | Unverified
2 | MultiMAE-DER | WAR | 83.61 | — | Unverified
3 | Intermediate-Attention-Fusion | Accuracy | 81.58 | — | Unverified
4 | Logistic Regression on posteriors of the CNN-14&biLSTM-GuidedST | Accuracy | 80.08 | — | Unverified
5 | ERANN-0-4 | Accuracy | 74.8 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | CAGE | Top-3 Accuracy (%) | 14.73 | — | Unverified
2 | FocusCLIP | Top-3 Accuracy (%) | 13.73 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | VGG based | 5-class test accuracy | 66.13 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | MaSaC-ERC-Z | F1-score (Weighted) | 51.17 | — | Unverified
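One of the benchmarks above reports a weighted F1-score: per-class F1 averaged with each class weighted by its number of true examples. A minimal stdlib-only sketch of that metric (the helper name `weighted_f1` is ours, not from any listed system):

```python
from collections import Counter

def weighted_f1(y_true, y_pred):
    """Per-class F1, averaged with weights proportional to true-class support."""
    labels = set(y_true) | set(y_pred)
    support = Counter(y_true)
    total = len(y_true)
    score = 0.0
    for c in labels:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        f1 = 2 * tp / (2 * tp + fp + fn) if tp else 0.0
        score += support[c] / total * f1
    return score

print(weighted_f1(["joy", "joy", "anger"], ["joy", "joy", "anger"]))  # 1.0
```

Weighting by support makes the score robust to class imbalance, which is common in emotion corpora (compare the class-imbalance paper in the list above).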
# | Model | Metric | Claimed | Verified | Status
1 | BiHDM | Accuracy | 40.34 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | w2v2-L-robust-12 | Concordance correlation coefficient (CCC) | 0.64 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | 4D-aNN | Accuracy | 96.1 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | CNN | 1'" | 1 | — | Unverified
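One benchmark above reports the concordance correlation coefficient (CCC), the standard metric for dimensional emotion prediction (continuous valence/arousal values rather than discrete classes). A stdlib-only sketch of Lin's CCC (our own helper, not code from the listed systems):

```python
def ccc(x, y):
    """Lin's concordance correlation coefficient:
    2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2).
    Penalizes both low correlation and systematic offset/scale bias."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return 2 * cov / (vx + vy + (mx - my) ** 2)

print(ccc([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))  # 1.0
```

Unlike Pearson correlation, CCC drops below 1 for predictions that are perfectly correlated but shifted or rescaled, which is why it is preferred for continuous emotion targets.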