SOTAVerified

Speech Emotion Recognition

Speech Emotion Recognition (SER) is a task in speech processing and computational paralinguistics that aims to recognize and categorize the emotions expressed in spoken language. The goal is to determine a speaker's emotional state, such as happiness, anger, sadness, or frustration, from speech characteristics such as prosody, pitch, and rhythm.
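As a rough illustration of the acoustic cues mentioned above, the sketch below computes two crude frame-level prosodic proxies (short-time energy and zero-crossing rate) from a synthetic waveform using only NumPy. The function name `frame_features` and all parameters are hypothetical; real SER systems use richer features such as pitch contours, MFCCs, or self-supervised embeddings.

```python
import numpy as np

def frame_features(signal, frame_len=400, hop=160):
    """Crude frame-level prosodic proxies: short-time energy and
    zero-crossing rate. Illustrative only; not a real SER front end."""
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len + 1, hop)]
    # Mean squared amplitude per frame (loudness proxy).
    energy = np.array([np.mean(f ** 2) for f in frames])
    # Fraction of samples where the sign flips (crude pitch/noisiness proxy).
    zcr = np.array([np.mean(np.abs(np.diff(np.sign(f)))) / 2 for f in frames])
    return energy, zcr

# Synthetic 1-second "utterance": a 220 Hz tone with rising amplitude,
# mimicking an increasingly energetic voice.
sr = 16000
t = np.linspace(0, 1, sr, endpoint=False)
wave = np.linspace(0.1, 1.0, sr) * np.sin(2 * np.pi * 220 * t)
energy, zcr = frame_features(wave)
```

On this signal the energy contour rises with the amplitude envelope while the zero-crossing rate stays roughly constant (the tone's frequency does not change), which is the kind of pattern a downstream classifier would learn to associate with emotional arousal.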

For multimodal emotion recognition, please upload your results to Multimodal Emotion Recognition on IEMOCAP.

Papers

Showing 101–150 of 431 papers

Title | Status | Hype
WavFusion: Towards wav2vec 2.0 Multimodal Speech Emotion Recognition | | 0
A Cross-Corpus Speech Emotion Recognition Method Based on Supervised Contrastive Learning | | 0
Once More, With Feeling: Measuring Emotion of Acting Performances in Contemporary American Film | | 0
Improvement and Implementation of a Speech Emotion Recognition Model Based on Dual-Layer LSTM | | 0
Re-Parameterization of Lightweight Transformer for On-Device Speech Emotion Recognition | | 0
Multi-modal Speech Emotion Recognition via Feature Distribution Adaptation Network | Code | 0
Improving Speech-based Emotion Recognition with Contextual Utterance Analysis and LLMs | | 0
A Survey on Speech Large Language Models | | 0
Investigating Effective Speaker Property Privacy Protection in Federated Learning for Speech Emotion Recognition | | 0
Multi-View Multi-Task Modeling with Speech Foundation Models for Speech Forensic Tasks | | 0
Enhancing Speech Emotion Recognition through Segmental Average Pooling of Self-Supervised Learning Features | | 0
SeQuiFi: Mitigating Catastrophic Forgetting in Speech Emotion Recognition with Sequential Class-Finetuning | | 0
Can We Estimate Purchase Intention Based on Zero-shot Speech Emotion Recognition? | | 0
Audio Explanation Synthesis with Generative Foundation Models | Code | 0
A Cross-Lingual Meta-Learning Method Based on Domain Adaptation for Speech Emotion Recognition | | 0
Multi-Scale Temporal Transformer For Speech Emotion Recognition | | 0
Two-stage Framework for Robust Speech Emotion Recognition Using Target Speaker Extraction in Human Speech Noise Conditions | | 0
Exploring Acoustic Similarity in Emotional Speech and Music via Self-Supervised Representations | | 0
Cross-Lingual Speech Emotion Recognition: Humans vs. Self-Supervised Models | Code | 0
Improving Speech Emotion Recognition in Under-Resourced Languages via Speech-to-Speech Translation with Bootstrapping Data Selection | Code | 0
Personalized Speech Emotion Recognition in Human-Robot Interaction using Vision Transformers | | 0
TBDM-Net: Bidirectional Dense Networks with Gender Information for Speech Emotion Recognition | Code | 0
Stimulus Modality Matters: Impact of Perceptual Evaluations from Different Modalities on Speech Emotion Recognition System Performance | | 0
Explaining Deep Learning Embeddings for Speech Emotion Recognition by Predicting Interpretable Acoustic Features | Code | 0
Turbo your multi-modal classification with contrastive learning | | 0
Leveraging Content and Acoustic Representations for Speech Emotion Recognition | Code | 0
Consensus-based Distributed Quantum Kernel Learning for Speech Recognition | | 0
Searching for Effective Preprocessing Method and CNN-based Architecture with Efficient Channel Attention on Speech Emotion Recognition | | 0
The Whole Is Bigger Than the Sum of Its Parts: Modeling Individual Annotators to Capture Emotional Variability | Code | 0
Audio Enhancement for Computer Audition -- An Iterative Training Paradigm Using Sample Importance | | 0
Conditioning LLMs with Emotion in Neural Machine Translation | | 0
Describe Where You Are: Improving Noise-Robustness for Speech Emotion Recognition with Text Description of the Environment | | 0
EMO-Codec: An In-Depth Look at Emotion Preservation Capacity of Legacy and Neural Codec Models With Subjective and Objective Evaluations | | 0
PCQ: Emotion Recognition in Speech via Progressive Channel Querying | | 0
BSC-UPC at EmoSPeech-IberLEF2024: Attention Pooling for Emotion Recognition | Code | 0
MSP-Podcast SER Challenge 2024: L'antenne du Ventoux Multimodal Self-Supervised Learning for Speech Emotion Recognition | | 0
A Layer-Anchoring Strategy for Enhancing Cross-Lingual Speech Emotion Recognition | | 0
Are you sure? Analysing Uncertainty Quantification Approaches for Real-world Speech Emotion Recognition | Code | 0
Breaking Resource Barriers in Speech Emotion Recognition via Data Distillation | | 0
Speech Emotion Recognition Using CNN and Its Use Case in Digital Healthcare | | 0
Double Multi-Head Attention Multimodal System for Odyssey 2024 Speech Emotion Recognition Challenge | | 0
What Does it Take to Generalize SER Model Across Datasets? A Comprehensive Benchmark | | 0
Exploring Multilingual Unseen Speaker Emotion Recognition: Leveraging Co-Attention Cues in Multitask Learning | Code | 0
Exploring Self-Supervised Multi-view Contrastive Learning for Speech Emotion Recognition with Limited Annotations | | 0
Speech Emotion Recognition with ASR Transcripts: A Comprehensive Study on Word Error Rate and Fusion Techniques | Code | 0
ExHuBERT: Enhancing HuBERT Through Block Extension and Fine-Tuning on 37 Emotion Datasets | Code | 0
INTERSPEECH 2009 Emotion Challenge Revisited: Benchmarking 15 Years of Progress in Speech Emotion Recognition | Code | 0
Enrolment-based personalisation for improving individual-level fairness in speech emotion recognition | Code | 0
Emo-bias: A Large Scale Evaluation of Social Bias on Speech Emotion Recognition | | 0
Multi-Microphone Speech Emotion Recognition using the Hierarchical Token-semantic Audio Transformer Architecture | | 0
Page 3 of 9

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Vertically long patch ViT | Accuracy | 94.07 | | Unverified
2 | ConformerXL-P | Accuracy | 88.2 | | Unverified
3 | CoordViT | Accuracy | 82.96 | | Unverified
4 | SepTr + LeRaC | Accuracy | 70.95 | | Unverified
5 | SepTr | Accuracy | 70.47 | | Unverified
6 | ResNet-18 + SPEL | Accuracy | 68.12 | | Unverified
7 | ViT | Accuracy | 67.81 | | Unverified
8 | ResNet-18 + PyNADA | Accuracy | 65.15 | | Unverified
9 | GRU | Accuracy | 55.01 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SER with MTL | UA CV | 0.78 | | Unverified
2 | emoDARTS | UA CV | 0.77 | | Unverified
3 | LSTM+FC | WA | 0.76 | | Unverified
4 | TAP | WA CV | 0.74 | | Unverified
5 | SYSCOMB: BLSTMATT with CSA (session5) | UA | 0.74 | | Unverified
6 | Partially Fine-tuned HuBERT Large | WA CV | 0.73 | | Unverified
7 | CNN - DARTS | UA | 0.7 | | Unverified
8 | CNN+LSTM | UA | 0.65 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | VQ-MAE-S-12 (Frame) + Query2Emo | Accuracy | 84.1 | | Unverified
2 | CNN-X (Shallow CNN) | Accuracy | 82.99 | | Unverified
3 | xlsr-Wav2Vec2.0 (FineTuning) | Accuracy | 81.82 | | Unverified
4 | CNN-14 (Fine-Tuning) | Accuracy | 76.58 | | Unverified
5 | AlexNet (FineTuning) | Accuracy | 61.67 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | wav2small-Teacher | CCC | 0.76 | | Unverified
2 | wavlm | CCC | 0.75 | | Unverified
3 | w2v2-L-robust-12 | CCC | 0.75 | | Unverified
4 | preCPC | CCC | 0.71 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | wav2small-Teacher | CCC | 0.68 | | Unverified
2 | wavlm | CCC | 0.67 | | Unverified
3 | w2v2-L-robust-12 | CCC | 0.66 | | Unverified
4 | preCPC | CCC | 0.64 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | wav2small-Teacher | CCC | 0.68 | | Unverified
2 | wavlm | CCC | 0.65 | | Unverified
3 | w2v2-L-robust-12 | CCC | 0.64 | | Unverified
4 | preCPC | CCC | 0.38 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | DAWN-hidden-SVM | Unweighted Accuracy (UA) | 32.1 | | Unverified
2 | Wav2Small-VAD-SVM | Unweighted Accuracy (UA) | 23.3 | | Unverified
3 | Speechbrain Wav2Vec2 | Unweighted Accuracy (UA) | 20.7 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | emotion2vec+base | Weighted Accuracy (WA) | 79.4 | | Unverified
2 | emotion2vec+large | Weighted Accuracy (WA) | 69.5 | | Unverified
3 | emotion2vec | Weighted Accuracy (WA) | 64.75 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Dusha baseline | Macro F1 | 0.77 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Dusha baseline | Macro F1 | 0.54 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | VGG-optiVMD | 1:1 Accuracy | 96.09 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | VQ-MAE-S-12 (Frame) + Query2Emo | Accuracy | 90.2 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | PyResNet | Unweighted Accuracy (UA) | 0.43 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | emoDARTS | UA | 0.66 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LSTM | CCC (Arousal) | 0.76 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CNN (1D) | Unweighted Accuracy | 65.2 | | Unverified
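The benchmark tables above report three common SER metrics: unweighted accuracy (UA, the mean of per-class recalls, robust to class imbalance), weighted accuracy (WA, plain accuracy, dominated by frequent classes), and the concordance correlation coefficient (CCC, used for continuous arousal/valence labels). A minimal NumPy sketch of the standard definitions, with hypothetical function names:

```python
import numpy as np

def unweighted_accuracy(y_true, y_pred):
    """UA (a.k.a. unweighted average recall): mean of per-class recalls."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    recalls = [np.mean(y_pred[y_true == c] == c) for c in np.unique(y_true)]
    return float(np.mean(recalls))

def weighted_accuracy(y_true, y_pred):
    """WA: overall fraction of correct predictions."""
    return float(np.mean(np.asarray(y_true) == np.asarray(y_pred)))

def ccc(x, y):
    """Lin's concordance correlation coefficient for dimensional labels."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    cov = np.mean((x - mx) * (y - my))
    return float(2 * cov / (x.var() + y.var() + (mx - my) ** 2))
```

On an imbalanced test set, UA and WA diverge: a classifier that misses a rare class loses more UA than WA, which is why leaderboards for skewed corpora such as IEMOCAP usually report both.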