SOTAVerified

Emotion Recognition

Emotion Recognition is an important area of research for enabling effective human-computer interaction. Human emotions can be detected from speech signals, facial expressions, body language, and electroencephalography (EEG). Source: Using Deep Autoencoders for Facial Expression Recognition

Papers

Showing 2001–2041 of 2041 papers

EmotiW 2018: Audio-Video, Student Engagement and Group-Level Affect Prediction
EMOVO Corpus: an Italian Emotional Speech Database
EmoWordNet: Automatic Expansion of Emotion Lexicon Using English WordNet
EmoWOZ: A Large-Scale Corpus and Labelling Scheme for Emotion Recognition in Task-Oriented Dialogue Systems
Empathetic Conversational Agents: Utilizing Neural and Physiological Signals for Enhanced Empathetic Interactions
Empathy and Distress Prediction using Transformer Multi-output Regression and Emotion Analysis with an Ensemble of Supervised and Zero-Shot Learning Models
Empathy Through Multimodality in Conversational Interfaces
Empirical Analysis of Asynchronous Federated Learning on Heterogeneous Devices: Efficiency, Fairness, and Privacy Trade-offs
Empirical Interpretation of Speech Emotion Perception with Attention Based Model for Speech Emotion Recognition
Empirical Interpretation of the Relationship Between Speech Acoustic Context and Emotion Recognition
Empowering Dysarthric Speech: Leveraging Advanced LLMs for Accurate Speech Correction and Multimodal Emotion Analysis
EMTC: Multilabel Corpus in Movie Domain for Emotion Analysis in Conversational Text
Enabling Deep Learning of Emotion With First-Person Seed Expressions
End-to-End Continuous Speech Emotion Recognition in Real-life Customer Service Call Center Conversations
End-to-End Emotional Speech Synthesis Using Style Tokens and Semi-Supervised Training
End-to-end facial and physiological model for Affective Computing and applications
End-to-End Speech Emotion Recognition: Challenges of Real-Life Emergency Call Centers Data Recordings
End-to-end transfer learning for speaker-independent cross-language and cross-corpus speech emotion recognition
English Prompts are Better for NLI-based Zero-Shot Emotion Classification than Target-Language Prompts
Enhanced Speech Emotion Recognition with Efficient Channel Attention Guided Deep CNN-BiLSTM Framework
Enhancing Emotion Recognition in Conversation through Emotional Cross-Modal Fusion and Inter-class Contrastive Learning
Enhancing Emotion Recognition in Incomplete Data: A Novel Cross-Modal Alignment, Reconstruction, and Refinement Framework
Enhancing Facial Expression Recognition through Dual-Direction Attention Mixed Feature Networks: Application to 7th ABAW Challenge
Enhancing Higher Education with Generative AI: A Multimodal Approach for Personalised Learning
Enhancing Multi-Label Emotion Analysis and Corresponding Intensities for Ethiopian Languages
Enhancing Multimodal Affective Analysis with Learned Live Comment Features
Enhancing Multimodal Emotion Recognition through Multi-Granularity Cross-Modal Alignment
Enhancing Segment-Based Speech Emotion Recognition by Deep Self-Learning
Enhancing Speech Emotion Recognition through Segmental Average Pooling of Self-Supervised Learning Features
Enhancing Speech Emotion Recognition with Graph-Based Multimodal Fusion and Prosodic Features for the Speech Emotion Recognition in Naturalistic Conditions Challenge at Interspeech 2025
Enhancing Student Engagement in Online Learning through Facial Expression Analysis and Complex Emotion Recognition using Deep Learning
Enhancing Emotional Generation Capability of Large Language Models via Emotional Chain-of-Thought
Ensemble emotion recognizing with multiple modal physiological signals
Ensemble knowledge distillation of self-supervised speech models
Ensemble of Hankel Matrices for Face Emotion Recognition
Ensembling Multilingual Pre-Trained Models for Predicting Multi-Label Regression Emotion Share from Speech
Entropy-Assisted Multi-Modal Emotion Recognition Framework Based on Physiological Signals
ERIT Lightweight Multimodal Dataset for Elderly Emotion Recognition and Multimodal Fusion Evaluation
ESIHGNN: Event-State Interactions Infused Heterogeneous Graph Neural Network for Conversational Emotion Recognition
ESTformer: Transformer Utilizing Spatiotemporal Dependencies for Electroencephalogram Super-resolution
Estimating the Uncertainty in Emotion Class Labels with Utterance-Specific Dirichlet Priors

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | M2D-CLAP | EmoA | 77.4 | | Unverified |
| 2 | M2D2 | EmoA | 76.7 | | Unverified |
| 3 | M2D | EmoA | 76.1 | | Unverified |
| 4 | Jukebox (Pre-training: CALM) | EmoA | 72.1 | | Unverified |
| 5 | CLMR (Pre-training: contrastive) | EmoA | 67.8 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | LogisticRegression on posteriors of xlsr-Wav2Vec2.0&bi-LSTM+Attention | Accuracy | 86.7 | | Unverified |
| 2 | MultiMAE-DER | WAR | 83.61 | | Unverified |
| 3 | Intermediate-Attention-Fusion | Accuracy | 81.58 | | Unverified |
| 4 | Logistic Regression on posteriors of the CNN-14&biLSTM-GuidedST | Accuracy | 80.08 | | Unverified |
| 5 | ERANN-0-4 | Accuracy | 74.8 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | CAGE | Top-3 Accuracy (%) | 14.73 | | Unverified |
| 2 | FocusCLIP | Top-3 Accuracy (%) | 13.73 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | VGG based | 5-class test accuracy | 66.13 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | MaSaC-ERC-Z | F1-score (Weighted) | 51.17 | | Unverified |
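The weighted F1-score reported above averages per-class F1 scores weighted by each class's support (its frequency in the ground truth). A minimal pure-Python sketch of that computation, with made-up labels for illustration:

```python
from collections import Counter

def weighted_f1(y_true, y_pred):
    """Per-class F1, averaged with weights proportional to class support."""
    support = Counter(y_true)
    total = len(y_true)
    score = 0.0
    for cls, n in support.items():
        tp = sum(t == cls and p == cls for t, p in zip(y_true, y_pred))
        fp = sum(t != cls and p == cls for t, p in zip(y_true, y_pred))
        fn = sum(t == cls and p != cls for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        score += (n / total) * f1
    return score

# Toy emotion labels (hypothetical, not from the benchmark)
print(weighted_f1(["joy", "joy", "anger"], ["joy", "anger", "anger"]))
```

This matches scikit-learn's `f1_score(..., average="weighted")` convention, where classes absent from the predictions still contribute a weighted zero.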
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | BiHDM | Accuracy | 40.34 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | w2v2-L-robust-12 | Concordance correlation coefficient (CCC) | 0.64 | | Unverified |
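The CCC metric above is Lin's concordance correlation coefficient, common for continuous emotion (valence/arousal) regression: 2·cov(x,y) / (var(x) + var(y) + (mean(x) − mean(y))²). A minimal pure-Python sketch, with toy inputs:

```python
def ccc(x, y):
    """Lin's concordance correlation coefficient between two sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n   # population variance
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return 2 * cov / (vx + vy + (mx - my) ** 2)

# Perfect agreement yields 1.0; anti-correlated predictions go negative.
print(ccc([0.1, 0.5, 0.9], [0.1, 0.5, 0.9]))  # -> 1.0
```

Unlike plain Pearson correlation, CCC also penalizes bias and scale mismatch between predictions and labels, which is why it is preferred for dimensional affect benchmarks.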
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | 4D-aNN | Accuracy | 96.1 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | CNN | | 1'"1 | | Unverified |