SOTAVerified

Emotion Recognition

Emotion recognition is an important research area for enabling effective human-computer interaction. Human emotions can be detected from speech signals, facial expressions, body language, and electroencephalography (EEG). Source: Using Deep Autoencoders for Facial Expression Recognition
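As a purely illustrative sketch of the speech-based branch of this task: extract crude prosodic features from a waveform (short-time energy and zero-crossing rate) and assign the emotion whose feature centroid is nearest. The feature set, the class centroids, and the toy "recordings" below are all hypothetical; real systems on this leaderboard use learned representations, not hand-set centroids.

```python
# Minimal sketch of speech emotion recognition via nearest-centroid matching.
# Features and centroid values are illustrative assumptions, not from any paper.
import math

def features(signal):
    """Return (mean energy, zero-crossing rate) for a list of samples."""
    energy = sum(s * s for s in signal) / len(signal)
    zcr = sum(1 for a, b in zip(signal, signal[1:]) if a * b < 0) / (len(signal) - 1)
    return (energy, zcr)

def classify(signal, centroids):
    """Assign the emotion label whose feature centroid is closest."""
    f = features(signal)
    return min(centroids, key=lambda label: math.dist(f, centroids[label]))

# Toy "recordings": a loud, rapidly oscillating signal vs. a quiet, slow one.
angry = [math.sin(0.9 * n) * 2.0 for n in range(400)]
calm = [math.sin(0.05 * n) * 0.2 for n in range(400)]

# Hypothetical per-emotion centroids in (energy, zero-crossing rate) space.
centroids = {"angry": (2.0, 0.28), "calm": (0.02, 0.02)}
print(classify(angry, centroids), classify(calm, centroids))  # prints: angry calm
```

Loud, fast-varying speech lands near the high-energy, high-zero-crossing centroid; quiet, slow speech lands near the other, which is the intuition behind classical prosody-based classifiers.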

Papers

Showing 26-50 of 2041 papers

Title | Status | Hype
Investigating the Impact of Word Informativeness on Speech Emotion Recognition | - | 0
Towards Machine Unlearning for Paralinguistic Speech Processing | - | 0
Are Mamba-based Audio Foundation Models the Best Fit for Non-Verbal Emotion Recognition? | - | 0
EfficientFER: EfficientNetv2 Based Deep Learning Approach for Facial Expression Recognition | Code | 1
Enhancing Speech Emotion Recognition with Graph-Based Multimodal Fusion and Prosodic Features for the Speech Emotion Recognition in Naturalistic Conditions Challenge at Interspeech 2025 | - | 0
Source Tracing of Synthetic Speech Systems Through Paralinguistic Pre-Trained Representations | - | 0
PARROT: Synergizing Mamba and Attention-based SSL Pre-Trained Models via Parallel Branch Hadamard Optimal Transport for Speech Emotion Recognition | - | 0
Learning More with Less: Self-Supervised Approaches for Low-Resource Speech Emotion Recognition | - | 0
MELT: Towards Automated Multimodal Emotion Data Annotation by Leveraging LLM Embedded Knowledge | Code | 0
KEVER^2: Knowledge-Enhanced Visual Emotion Reasoning and Retrieval | - | 0
Can Emotion Fool Anti-spoofing? | - | 0
What About Emotions? Guiding Fine-Grained Emotion Extraction from Mobile App Reviews | Code | 0
Learning Annotation Consensus for Continuous Emotion Recognition | - | 0
Inceptive Transformers: Enhancing Contextual Representations through Multi-Scale Feature Learning Across Domains and Languages | - | 0
EmoNet-Face: An Expert-Annotated Benchmark for Synthetic Emotion Recognition | - | 0
Knowledge-Aligned Counterfactual-Enhancement Diffusion Perception for Unsupervised Cross-Domain Visual Emotion Recognition | - | 0
EmoSphere-SER: Enhancing Speech Emotion Recognition Through Spherical Representation with Auxiliary Classification | Code | 2
Improving Speech Emotion Recognition Through Cross Modal Attention Alignment and Balanced Stacking Model | Code | 0
ALAS: Measuring Latent Speech-Text Alignment For Spoken Language Understanding In Multimodal LLMs | - | 0
Contrastive Distillation of Emotion Knowledge from LLMs for Zero-Shot Emotion Recognition | Code | 0
CosyVoice 3: Towards In-the-wild Speech Generation via Scaling-up and Post-training | Code | 11
Audio-to-Audio Emotion Conversion With Pitch And Duration Style Transfer | - | 0
ABHINAYA -- A System for Speech Emotion Recognition In Naturalistic Conditions Challenge | Code | 0
Meta-PerSER: Few-Shot Listener Personalized Speech Emotion Recognition via Meta-learning | - | 0
MIKU-PAL: An Automated and Standardized Multi-Modal Method for Speech Paralinguistic and Affect Labeling | - | 0
Page 2 of 82

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | M2D-CLAP | EmoA | 77.4 | - | Unverified
2 | M2D2 | EmoA | 76.7 | - | Unverified
3 | M2D | EmoA | 76.1 | - | Unverified
4 | Jukebox (Pre-training: CALM) | EmoA | 72.1 | - | Unverified
5 | CLMR (Pre-training: contrastive) | EmoA | 67.8 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LogisticRegression on posteriors of xlsr-Wav2Vec2.0 & bi-LSTM+Attention | Accuracy | 86.7 | - | Unverified
2 | MultiMAE-DER | WAR | 83.61 | - | Unverified
3 | Intermediate-Attention-Fusion | Accuracy | 81.58 | - | Unverified
4 | Logistic Regression on posteriors of the CNN-14 & biLSTM-GuidedST | Accuracy | 80.08 | - | Unverified
5 | ERANN-0-4 | Accuracy | 74.8 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CAGE | Top-3 Accuracy (%) | 14.73 | - | Unverified
2 | FocusCLIP | Top-3 Accuracy (%) | 13.73 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | VGG based | 5-class test accuracy | 66.13 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | MaSaC-ERC-Z | F1-score (Weighted) | 51.17 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | BiHDM | Accuracy | 40.34 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | w2v2-L-robust-12 | Concordance correlation coefficient (CCC) | 0.64 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 4D-aNN | Accuracy | 96.1 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CNN | - | 1'"1 | - | Unverified