SOTAVerified

Speech Emotion Recognition

Speech Emotion Recognition (SER) is a task in speech processing and computational paralinguistics that aims to recognize and categorize the emotions expressed in spoken language. The goal is to infer a speaker's emotional state, such as happiness, anger, sadness, or frustration, from acoustic cues in their speech, such as prosody, pitch, and rhythm.
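To make the task description concrete, here is a deliberately minimal, purely illustrative sketch (not drawn from any paper listed below): it extracts two crude prosodic features, pitch via zero-crossing counting and short-time energy, and maps them to a coarse arousal label with a hand-set rule. All function names, thresholds, and the synthetic test signals are invented for illustration; real SER systems use richer features (e.g. mel spectrograms or self-supervised embeddings) and learned classifiers.

```python
# Illustrative sketch only: toy prosodic features -> coarse arousal label.
# Thresholds and signals are invented; not a real SER system.
import math

SR = 16000  # assumed sample rate in Hz

def synth_tone(freq_hz, seconds=0.5, amp=0.8):
    """Generate a pure tone as a stand-in for a speech segment."""
    n = int(SR * seconds)
    return [amp * math.sin(2 * math.pi * freq_hz * i / SR) for i in range(n)]

def pitch_estimate(samples):
    """Crude F0 estimate from zero-crossing count (valid only for clean tones)."""
    crossings = sum(
        1 for a, b in zip(samples, samples[1:]) if a < 0 <= b or b < 0 <= a
    )
    # Two zero crossings per period -> F0 ~ crossings * SR / (2 * N samples).
    return crossings * SR / (2 * len(samples))

def energy(samples):
    """Mean squared amplitude, a rough loudness proxy."""
    return sum(s * s for s in samples) / len(samples)

def arousal_label(samples, f0_thresh=200.0, energy_thresh=0.1):
    """Toy rule: high pitch AND high energy -> 'high-arousal' (e.g. anger, joy)."""
    if pitch_estimate(samples) > f0_thresh and energy(samples) > energy_thresh:
        return "high-arousal"
    return "low-arousal"

excited = synth_tone(300.0, amp=0.9)  # high pitch, loud
calm = synth_tone(120.0, amp=0.2)     # low pitch, quiet
print(arousal_label(excited))  # high-arousal
print(arousal_label(calm))     # low-arousal
```

The point of the sketch is only that the cues named above (pitch, loudness/prosody) carry emotional information; the papers below replace the hand-set rule with trained models.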

For multimodal emotion recognition, please submit your results to the Multimodal Emotion Recognition on IEMOCAP task instead.

Papers

Showing 1–50 of 431 papers

Title | Status | Hype
Dynamic Parameter Memory: Temporary LoRA-Enhanced LLM for Long-Sequence Emotion Recognition in Conversation | Code | 0
MATER: Multi-level Acoustic and Textual Emotion Representation for Interpretable Speech Emotion Recognition | — | 0
Developing a High-performance Framework for Speech Emotion Recognition in Naturalistic Conditions Challenge for Emotional Attribute Prediction | — | 0
MEDUSA: A Multimodal Deep Fusion Multi-Stage Training Framework for Speech Emotion Recognition in Naturalistic Conditions | Code | 0
Multi-Teacher Language-Aware Knowledge Distillation for Multilingual Speech Emotion Recognition | Code | 0
CO-VADA: A Confidence-Oriented Voice Augmentation Debiasing Approach for Fair Speech Emotion Recognition | — | 0
EMO-Debias: Benchmarking Gender Debiasing Techniques in Multi-Label Speech Emotion Recognition | — | 0
HYFuse: Aligning Heterogeneous Speech Pre-Trained Representations in Hyperbolic Space for Speech Emotion Recognition | — | 0
Investigating the Impact of Word Informativeness on Speech Emotion Recognition | — | 0
Are Mamba-based Audio Foundation Models the Best Fit for Non-Verbal Emotion Recognition? | — | 0
Towards Machine Unlearning for Paralinguistic Speech Processing | — | 0
Enhancing Speech Emotion Recognition with Graph-Based Multimodal Fusion and Prosodic Features for the Speech Emotion Recognition in Naturalistic Conditions Challenge at Interspeech 2025 | — | 0
Learning More with Less: Self-Supervised Approaches for Low-Resource Speech Emotion Recognition | — | 0
Source Tracing of Synthetic Speech Systems Through Paralinguistic Pre-Trained Representations | — | 0
PARROT: Synergizing Mamba and Attention-based SSL Pre-Trained Models via Parallel Branch Hadamard Optimal Transport for Speech Emotion Recognition | — | 0
MELT: Towards Automated Multimodal Emotion Data Annotation by Leveraging LLM Embedded Knowledge | Code | 0
Can Emotion Fool Anti-spoofing? | — | 0
EmoSphere-SER: Enhancing Speech Emotion Recognition Through Spherical Representation with Auxiliary Classification | Code | 2
Improving Speech Emotion Recognition Through Cross Modal Attention Alignment and Balanced Stacking Model | Code | 0
ABHINAYA -- A System for Speech Emotion Recognition In Naturalistic Conditions Challenge | Code | 0
CosyVoice 3: Towards In-the-wild Speech Generation via Scaling-up and Post-training | Code | 11
Meta-PerSER: Few-Shot Listener Personalized Speech Emotion Recognition via Meta-learning | — | 0
Mitigating Subgroup Disparities in Multi-Label Speech Emotion Recognition: A Pseudo-Labeling and Unsupervised Learning Approach | — | 0
CAMEO: Collection of Multilingual Emotional Speech Corpora | — | 0
Empirical Analysis of Asynchronous Federated Learning on Heterogeneous Devices: Efficiency, Fairness, and Privacy Trade-offs | — | 0
BERSting at the Screams: A Benchmark for Distanced, Emotional and Shouted Speech Recognition | Code | 0
Large Language Models Meet Contrastive Learning: Zero-Shot Emotion Recognition Across Languages | Code | 0
Deep Learning for Speech Emotion Recognition: A CNN Approach Utilizing Mel Spectrograms | — | 0
Coverage-Guaranteed Speech Emotion Recognition via Calibrated Uncertainty-Adaptive Prediction Sets | — | 0
Heterogeneous bimodal attention fusion for speech emotion recognition | — | 0
Bimodal Connection Attention Fusion for Speech Emotion Recognition | — | 0
Steering Language Model to Stable Speech Emotion Recognition via Contextual Perception and Chain of Thought | Code | 1
SigWavNet: Learning Multiresolution Signal Wavelet Network for Speech Emotion Recognition | Code | 1
OSUM: Advancing Open Speech Understanding Models with Limited Resources in Academia | Code | 3
EmoTech: A Multi-modal Speech Emotion Recognition Using Multi-source Low-level Information with Hybrid Recurrent Network | — | 0
EmoFormer: A Text-Independent Speech Emotion Recognition using a Hybrid Transformer-CNN model | — | 0
Representation Learning with Parameterised Quantum Circuits for Advancing Speech Emotion Recognition | — | 0
Leveraging Cross-Attention Transformer and Multi-Feature Fusion for Cross-Linguistic Speech Emotion Recognition | — | 0
Learning Discriminative Features from Spectrograms Using Center Loss for Speech Emotion Recognition | — | 0
Is It Still Fair? Investigating Gender Fairness in Cross-Corpus Speech Emotion Recognition | — | 0
Metadata-Enhanced Speech Emotion Recognition: Augmented Residual Integration and Co-Attention in Two-Stage Fine-Tuning | — | 0
Mouth Articulation-Based Anchoring for Improved Cross-Corpus Speech Emotion Recognition | — | 0
Enhanced Speech Emotion Recognition with Efficient Channel Attention Guided Deep CNN-BiLSTM Framework | — | 0
Emotional Vietnamese Speech-Based Depression Diagnosis Using Dynamic Attention Mechanism | Code | 0
WavFusion: Towards wav2vec 2.0 Multimodal Speech Emotion Recognition | — | 0
A Cross-Corpus Speech Emotion Recognition Method Based on Supervised Contrastive Learning | — | 0
Once More, With Feeling: Measuring Emotion of Acting Performances in Contemporary American Film | — | 0
Re-Parameterization of Lightweight Transformer for On-Device Speech Emotion Recognition | — | 0
Improvement and Implementation of a Speech Emotion Recognition Model Based on Dual-Layer LSTM | — | 0
Multi-modal Speech Emotion Recognition via Feature Distribution Adaptation Network | Code | 0
Page 1 of 9

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Vertically long patch ViT | Accuracy | 94.07 | — | Unverified
2 | ConformerXL-P | Accuracy | 88.2 | — | Unverified
3 | CoordViT | Accuracy | 82.96 | — | Unverified
4 | SepTr + LeRaC | Accuracy | 70.95 | — | Unverified
5 | SepTr | Accuracy | 70.47 | — | Unverified
6 | ResNet-18 + SPEL | Accuracy | 68.12 | — | Unverified
7 | ViT | Accuracy | 67.81 | — | Unverified
8 | ResNet-18 + PyNADA | Accuracy | 65.15 | — | Unverified
9 | GRU | Accuracy | 55.01 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SER with MTL | UA CV | 0.78 | — | Unverified
2 | emoDARTS | UA CV | 0.77 | — | Unverified
3 | LSTM+FC | WA | 0.76 | — | Unverified
4 | TAP | WA CV | 0.74 | — | Unverified
5 | SYSCOMB: BLSTMATT with CSA (session5) | UA | 0.74 | — | Unverified
6 | Partially Fine-tuned HuBERT Large | WA CV | 0.73 | — | Unverified
7 | CNN - DARTS | UA | 0.7 | — | Unverified
8 | CNN+LSTM | UA | 0.65 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | VQ-MAE-S-12 (Frame) + Query2Emo | Accuracy | 84.1 | — | Unverified
2 | CNN-X (Shallow CNN) | Accuracy | 82.99 | — | Unverified
3 | xlsr-Wav2Vec2.0 (FineTuning) | Accuracy | 81.82 | — | Unverified
4 | CNN-14 (Fine-Tuning) | Accuracy | 76.58 | — | Unverified
5 | AlexNet (FineTuning) | Accuracy | 61.67 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | wav2small-Teacher | CCC | 0.76 | — | Unverified
2 | wavlm | CCC | 0.75 | — | Unverified
3 | w2v2-L-robust-12 | CCC | 0.75 | — | Unverified
4 | preCPC | CCC | 0.71 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | wav2small-Teacher | CCC | 0.68 | — | Unverified
2 | wavlm | CCC | 0.67 | — | Unverified
3 | w2v2-L-robust-12 | CCC | 0.66 | — | Unverified
4 | preCPC | CCC | 0.64 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | wav2small-Teacher | CCC | 0.68 | — | Unverified
2 | wavlm | CCC | 0.65 | — | Unverified
3 | w2v2-L-robust-12 | CCC | 0.64 | — | Unverified
4 | preCPC | CCC | 0.38 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | DAWN-hidden-SVM | Unweighted Accuracy (UA) | 32.1 | — | Unverified
2 | Wav2Small-VAD-SVM | Unweighted Accuracy (UA) | 23.3 | — | Unverified
3 | Speechbrain Wav2Vec2 | Unweighted Accuracy (UA) | 20.7 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | emotion2vec+base | Weighted Accuracy (WA) | 79.4 | — | Unverified
2 | emotion2vec+large | Weighted Accuracy (WA) | 69.5 | — | Unverified
3 | emotion2vec | Weighted Accuracy (WA) | 64.75 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Dusha baseline | Macro F1 | 0.77 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Dusha baseline | Macro F1 | 0.54 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | VGG-optiVMD | 1:1 Accuracy | 96.09 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | VQ-MAE-S-12 (Frame) + Query2Emo | Accuracy | 90.2 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | PyResNet | Unweighted Accuracy (UA) | 0.43 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | emoDARTS | UA | 0.66 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LSTM | CCC (Arousal) | 0.76 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CNN (1D) | Unweighted Accuracy | 65.2 | — | Unverified