SOTAVerified

Emotion Recognition

Emotion recognition is an important research area for enabling effective human-computer interaction. Human emotions can be detected from speech signals, facial expressions, body language, and electroencephalography (EEG). Source: Using Deep Autoencoders for Facial Expression Recognition

Papers

Showing 501-550 of 2041 papers

| Title | Status | Hype |
| --- | --- | --- |
| Re-Parameterization of Lightweight Transformer for On-Device Speech Emotion Recognition | | 0 |
| Emotion Classification of Children Expressions | | 0 |
| Emotion-Aware Interaction Design in Intelligent User Interface Using Multi-Modal Deep Learning | | 0 |
| Smile upon the Face but Sadness in the Eyes: Emotion Recognition based on Facial Expressions and Eye Behaviors | | 0 |
| Emotional Images: Assessing Emotions in Images and Potential Biases in Generative Models | | 0 |
| Speaker Emotion Recognition: Leveraging Self-Supervised Models for Feature Extraction Using Wav2Vec2 and HuBERT | | 0 |
| Exploring Vision Language Models for Facial Attribute Recognition: Emotion, Race, Gender, and Age | | 0 |
| Semi-Supervised Self-Learning Enhanced Music Emotion Recognition | | 0 |
| Multi-modal Speech Emotion Recognition via Feature Distribution Adaptation Network | Code | 0 |
| Leaving Some Facial Features Behind | Code | 0 |
| EEG-based Multimodal Representation Learning for Emotion Recognition | | 0 |
| TGCA-PVT: Topic-Guided Context-Aware Pyramid Vision Transformer for Sticker Emotion Recognition | Code | 0 |
| Improving Speech-based Emotion Recognition with Contextual Utterance Analysis and LLMs | | 0 |
| A Survey on Speech Large Language Models | | 0 |
| Emotion Recognition with Facial Attention and Objective Activation Functions | | 0 |
| Enhancing Multimodal Affective Analysis with Learned Live Comment Features | | 0 |
| MMDS: A Multimodal Medical Diagnosis System Integrating Image Analysis and Knowledge-based Departmental Consultation | | 0 |
| Regularized Xception for facial expression recognition with extra training data and step decay learning rate | Code | 0 |
| Investigating Effective Speaker Property Privacy Protection in Federated Learning for Speech Emotion Recognition | | 0 |
| Multi-View Multi-Task Modeling with Speech Foundation Models for Speech Forensic Tasks | | 0 |
| Enhancing Speech Emotion Recognition through Segmental Average Pooling of Self-Supervised Learning Features | | 0 |
| SeQuiFi: Mitigating Catastrophic Forgetting in Speech Emotion Recognition with Sequential Class-Finetuning | | 0 |
| EmotionCaps: Enhancing Audio Captioning Through Emotion-Augmented Data Generation | | 0 |
| Leveraging LLM Embeddings for Cross Dataset Label Alignment and Zero Shot Music Emotion Prediction | Code | 0 |
| Empowering Dysarthric Speech: Leveraging Advanced LLMs for Accurate Speech Correction and Multimodal Emotion Analysis | | 0 |
| Can We Estimate Purchase Intention Based on Zero-shot Speech Emotion Recognition? | | 0 |
| Audio Explanation Synthesis with Generative Foundation Models | Code | 0 |
| A Cross-Lingual Meta-Learning Method Based on Domain Adaptation for Speech Emotion Recognition | | 0 |
| Context and System Fusion in Post-ASR Emotion Recognition with Large Language Models | Code | 0 |
| EmojiHeroVR: A Study on Facial Expression Recognition under Partial Occlusion from Head-Mounted Displays | Code | 0 |
| Visual Prompting in LLMs for Enhancing Emotion Recognition | | 0 |
| GCM-Net: Graph-enhanced Cross-Modal Infusion with a Metaheuristic-Driven Network for Video Sentiment and Emotion Analysis | | 0 |
| Multi-Scale Temporal Transformer For Speech Emotion Recognition | | 0 |
| EEG Emotion Copilot: Optimizing Lightweight LLMs for Emotional EEG Interpretation with Assisted Medical Record Generation | Code | 0 |
| Two-stage Framework for Robust Speech Emotion Recognition Using Target Speaker Extraction in Human Speech Noise Conditions | | 0 |
| Self-supervised Auxiliary Learning for Texture and Model-based Hybrid Robust and Fair Featuring in Face Analysis | | 0 |
| Evaluation of OpenAI o1: Opportunities and Challenges of AGI | | 0 |
| UniEmoX: Cross-modal Semantic-Guided Large-Scale Pretraining for Universal Scene Emotion Perception | Code | 0 |
| Exploring Acoustic Similarity in Emotional Speech and Music via Self-Supervised Representations | | 0 |
| AER-LLM: Ambiguity-aware Emotion Recognition Leveraging Large Language Models | | 0 |
| Cross-Lingual Speech Emotion Recognition: Humans vs. Self-Supervised Models | Code | 0 |
| Semi-Supervised Cognitive State Classification from Speech with Multi-View Pseudo-Labeling | Code | 0 |
| Online Multi-level Contrastive Representation Distillation for Cross-Subject fNIRS Emotion Recognition | Code | 0 |
| EvoFA: Evolvable Fast Adaptation for EEG Emotion Recognition | | 0 |
| Improving Emotion Recognition Accuracy with Personalized Clustering | | 0 |
| CA-MHFA: A Context-Aware Multi-Head Factorized Attentive Pooling for SSL-Based Speaker Verification | | 0 |
| Revise, Reason, and Recognize: LLM-Based Emotion Recognition via Emotion-Specific Prompts and ASR Error Correction | Code | 0 |
| Avengers Assemble: Amalgamation of Non-Semantic Features for Depression Detection | | 0 |
| Strong Alone, Stronger Together: Synergizing Modality-Binding Foundation Models with Optimal Transport for Non-Verbal Emotion Recognition | | 0 |
| EmotionQueen: A Benchmark for Evaluating Empathy of Large Language Models | | 0 |
Page 11 of 41

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | M2D-CLAP | EmoA | 77.4 | | Unverified |
| 2 | M2D2 | EmoA | 76.7 | | Unverified |
| 3 | M2D | EmoA | 76.1 | | Unverified |
| 4 | Jukebox (Pre-training: CALM) | EmoA | 72.1 | | Unverified |
| 5 | CLMR (Pre-training: contrastive) | EmoA | 67.8 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | LogisticRegression on posteriors of xlsr-Wav2Vec2.0&bi-LSTM+Attention | Accuracy | 86.7 | | Unverified |
| 2 | MultiMAE-DER | WAR | 83.61 | | Unverified |
| 3 | Intermediate-Attention-Fusion | Accuracy | 81.58 | | Unverified |
| 4 | Logistic Regression on posteriors of the CNN-14&biLSTM-GuidedST | Accuracy | 80.08 | | Unverified |
| 5 | ERANN-0-4 | Accuracy | 74.8 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | CAGE | Top-3 Accuracy (%) | 14.73 | | Unverified |
| 2 | FocusCLIP | Top-3 Accuracy (%) | 13.73 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | VGG based | 5-class test accuracy | 66.13 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | MaSaC-ERC-Z | F1-score (Weighted) | 51.17 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | BiHDM | Accuracy | 40.34 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | w2v2-L-robust-12 | Concordance correlation coefficient (CCC) | 0.64 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | 4D-aNN | Accuracy | 96.1 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | CNN | | 1'"1 | | Unverified |