SOTAVerified

Emotion Recognition

Emotion Recognition is an important research area for enabling effective human-computer interaction. Human emotions can be detected from speech signals, facial expressions, body language, and electroencephalography (EEG). Source: Using Deep Autoencoders for Facial Expression Recognition
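The modalities above are typically reduced to feature vectors and classified. As a toy illustration only (not the method of any listed paper), the sketch below classifies synthetic utterance-level feature vectors into emotion classes with a nearest-centroid classifier; the emotion labels, feature dimension, and data are all invented placeholders:

```python
import numpy as np

# Toy sketch: in practice the features would come from speech (e.g. MFCCs),
# facial landmarks, or EEG band power; here they are synthetic placeholders.
rng = np.random.default_rng(0)
EMOTIONS = ["angry", "happy", "neutral", "sad"]

# Synthetic training data: 20 feature vectors per emotion,
# clustered around a per-class mean.
means = rng.normal(size=(len(EMOTIONS), 8)) * 3
train = {e: means[i] + rng.normal(size=(20, 8)) for i, e in enumerate(EMOTIONS)}

# "Training" is just one centroid per emotion class.
centroids = np.stack([train[e].mean(axis=0) for e in EMOTIONS])

def predict(x: np.ndarray) -> str:
    """Return the emotion whose centroid is closest to feature vector x."""
    dists = np.linalg.norm(centroids - x, axis=1)
    return EMOTIONS[int(np.argmin(dists))]

# A sample drawn near the "happy" class mean should be labelled "happy".
sample = means[1] + rng.normal(size=8) * 0.1
print(predict(sample))
```

Real systems replace both the features and the classifier (e.g. with self-supervised speech representations and a fine-tuned transformer), but the feature-then-classify structure is the same.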

Papers

Showing 251–275 of 2041 papers

| Title | Status | Hype |
|---|---|---|
| Disentangling Textual and Acoustic Features of Neural Speech Representations | Code | 1 |
| Visual Prompting in LLMs for Enhancing Emotion Recognition | | 0 |
| FastAdaSP: Multitask-Adapted Efficient Inference for Large Speech Language Model | Code | 1 |
| GCM-Net: Graph-enhanced Cross-Modal Infusion with a Metaheuristic-Driven Network for Video Sentiment and Emotion Analysis | | 0 |
| Multi-Scale Temporal Transformer For Speech Emotion Recognition | | 0 |
| Do Music Generation Models Encode Music Theory? | Code | 1 |
| EEG Emotion Copilot: Optimizing Lightweight LLMs for Emotional EEG Interpretation with Assisted Medical Record Generation | Code | 0 |
| Two-stage Framework for Robust Speech Emotion Recognition Using Target Speaker Extraction in Human Speech Noise Conditions | | 0 |
| Self-supervised Auxiliary Learning for Texture and Model-based Hybrid Robust and Fair Featuring in Face Analysis | | 0 |
| Evaluation of OpenAI o1: Opportunities and Challenges of AGI | | 0 |
| UniEmoX: Cross-modal Semantic-Guided Large-Scale Pretraining for Universal Scene Emotion Perception | Code | 0 |
| AER-LLM: Ambiguity-aware Emotion Recognition Leveraging Large Language Models | | 0 |
| Exploring Acoustic Similarity in Emotional Speech and Music via Self-Supervised Representations | | 0 |
| Cross-Lingual Speech Emotion Recognition: Humans vs. Self-Supervised Models | Code | 0 |
| Semi-Supervised Cognitive State Classification from Speech with Multi-View Pseudo-Labeling | Code | 0 |
| EvoFA: Evolvable Fast Adaptation for EEG Emotion Recognition | | 0 |
| Online Multi-level Contrastive Representation Distillation for Cross-Subject fNIRS Emotion Recognition | Code | 0 |
| Improving Emotion Recognition Accuracy with Personalized Clustering | | 0 |
| CA-MHFA: A Context-Aware Multi-Head Factorized Attentive Pooling for SSL-Based Speaker Verification | | 0 |
| Addressing Emotion Bias in Music Emotion Recognition and Generation with Frechet Audio Distance | Code | 3 |
| Revise, Reason, and Recognize: LLM-Based Emotion Recognition via Emotion-Specific Prompts and ASR Error Correction | Code | 0 |
| Avengers Assemble: Amalgamation of Non-Semantic Features for Depression Detection | | 0 |
| Strong Alone, Stronger Together: Synergizing Modality-Binding Foundation Models with Optimal Transport for Non-Verbal Emotion Recognition | | 0 |
| EmotionQueen: A Benchmark for Evaluating Empathy of Large Language Models | | 0 |
| Improving Speech Emotion Recognition in Under-Resourced Languages via Speech-to-Speech Translation with Bootstrapping Data Selection | Code | 0 |
Page 11 of 82

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | M2D-CLAP | EmoA | 77.4 | | Unverified |
| 2 | M2D2 | EmoA | 76.7 | | Unverified |
| 3 | M2D | EmoA | 76.1 | | Unverified |
| 4 | Jukebox (Pre-training: CALM) | EmoA | 72.1 | | Unverified |
| 5 | CLMR (Pre-training: contrastive) | EmoA | 67.8 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | LogisticRegression on posteriors of xlsr-Wav2Vec2.0&bi-LSTM+Attention | Accuracy | 86.7 | | Unverified |
| 2 | MultiMAE-DER | WAR | 83.61 | | Unverified |
| 3 | Intermediate-Attention-Fusion | Accuracy | 81.58 | | Unverified |
| 4 | Logistic Regression on posteriors of the CNN-14&biLSTM-GuidedST | Accuracy | 80.08 | | Unverified |
| 5 | ERANN-0-4 | Accuracy | 74.8 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | CAGE | Top-3 Accuracy (%) | 14.73 | | Unverified |
| 2 | FocusCLIP | Top-3 Accuracy (%) | 13.73 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | VGG based | 5-class test accuracy | 66.13 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | MaSaC-ERC-Z | F1-score (Weighted) | 51.17 | | Unverified |
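For reference, the weighted F1-score reported above averages per-class F1 scores with weights proportional to each class's support, which matters for emotion datasets with a dominant "neutral" class. A minimal numpy sketch with invented labels (equivalent in behavior to scikit-learn's `f1_score(..., average="weighted")`):

```python
import numpy as np

def weighted_f1(y_true, y_pred, labels):
    """Support-weighted average of per-class F1 scores."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    total, score = len(y_true), 0.0
    for c in labels:
        tp = np.sum((y_pred == c) & (y_true == c))
        fp = np.sum((y_pred == c) & (y_true != c))
        fn = np.sum((y_pred != c) & (y_true == c))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        score += (tp + fn) / total * f1  # weight = class support / total
    return score

# Toy labels, not from any dataset on this page.
y_true = ["joy", "joy", "anger", "neutral", "neutral", "neutral"]
y_pred = ["joy", "anger", "anger", "neutral", "neutral", "joy"]
print(round(weighted_f1(y_true, y_pred, ["joy", "anger", "neutral"]), 4))
```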
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | BiHDM | Accuracy | 40.34 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | w2v2-L-robust-12 | Concordance correlation coefficient (CCC) | 0.64 | | Unverified |
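The CCC metric in the table above is Lin's concordance correlation coefficient, the standard metric for dimensional (valence/arousal) emotion regression: CCC = 2·cov(x, y) / (var(x) + var(y) + (mean(x) − mean(y))²). Unlike Pearson's r, it penalizes scale and offset errors, so it equals 1 only for exact agreement. A small sketch with made-up target values:

```python
import numpy as np

def ccc(x, y):
    """Lin's concordance correlation coefficient between two sequences."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    cov = np.mean((x - x.mean()) * (y - y.mean()))
    return 2 * cov / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

targets = np.array([0.1, 0.4, 0.5, 0.8])     # toy arousal targets
print(round(ccc(targets, targets), 2))       # perfect agreement -> 1.0
print(round(ccc(targets, targets + 0.3), 2)) # constant offset lowers CCC
```

Pearson correlation would report 1.0 for the offset predictions as well; CCC's mean-difference term is what distinguishes the two cases.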
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | 4D-aNN | Accuracy | 96.1 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | CNN | 1'" | 1 | | Unverified |