SOTAVerified

Speech Emotion Recognition

Speech Emotion Recognition (SER) is a task in speech processing and computational paralinguistics that aims to recognize and categorize the emotions expressed in spoken language. The goal is to infer a speaker's emotional state, such as happiness, anger, sadness, or frustration, from properties of the speech signal such as prosody, pitch, and rhythm.
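As a rough illustration of the prosodic cues mentioned above, the sketch below extracts frame-level pitch (via autocorrelation) and RMS energy from a waveform and summarizes them as utterance-level statistics, the kind of features a classical SER front end might feed to a classifier. This is a minimal NumPy-only sketch; the function names and parameter choices are illustrative, not from any specific system listed here.

```python
import numpy as np

def pitch_autocorr(frame, sr, fmin=50.0, fmax=500.0):
    """Estimate the fundamental frequency of one frame via autocorrelation."""
    frame = frame - frame.mean()
    # Autocorrelation for non-negative lags only.
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)   # lag range for [fmin, fmax]
    lag = lo + np.argmax(ac[lo:hi])
    return sr / lag

def prosody_features(signal, sr, frame_len=2048, hop=512):
    """Utterance-level summary of frame-level pitch and RMS energy."""
    pitches, energies = [], []
    for start in range(0, len(signal) - frame_len, hop):
        frame = signal[start:start + frame_len]
        pitches.append(pitch_autocorr(frame, sr))
        energies.append(np.sqrt(np.mean(frame ** 2)))
    p, e = np.array(pitches), np.array(energies)
    return {"pitch_mean": p.mean(), "pitch_std": p.std(),
            "energy_mean": e.mean(), "energy_std": e.std()}
```

Modern systems on this page mostly replace such hand-crafted features with learned representations (wav2vec 2.0, HuBERT, emotion2vec), but summary prosody statistics remain a common interpretable baseline.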

For multimodal emotion recognition, please upload your results to the Multimodal Emotion Recognition on IEMOCAP benchmark.

Papers

Showing 351–400 of 431 papers

Title | Status | Hype
Transfer Learning for Improving Speech Emotion Classification Accuracy | Code | 0
Evaluating Gammatone Frequency Cepstral Coefficients with Neural Networks for Emotion Recognition from Speech | Code | 0
MEDUSA: A Multimodal Deep Fusion Multi-Stage Training Framework for Speech Emotion Recognition in Naturalistic Conditions | Code | 0
End-to-End Label Uncertainty Modeling in Speech Emotion Recognition using Bayesian Neural Networks and Label Distribution Learning | Code | 0
ExHuBERT: Enhancing HuBERT Through Block Extension and Fine-Tuning on 37 Emotion Datasets | Code | 0
Explaining Deep Learning Embeddings for Speech Emotion Recognition by Predicting Interpretable Acoustic Features | Code | 0
MELT: Towards Automated Multimodal Emotion Data Annotation by Leveraging LLM Embedded Knowledge | Code | 0
Exploring Multilingual Unseen Speaker Emotion Recognition: Leveraging Co-Attention Cues in Multitask Learning | Code | 0
Unlocking the Emotional States of High-Risk Suicide Callers through Speech Analysis | Code | 0
End-To-End Label Uncertainty Modeling for Speech-based Arousal Recognition Using Bayesian Neural Networks | Code | 0
A novel policy for pre-trained Deep Reinforcement Learning for Speech Emotion Recognition | Code | 0
Multi-modal Speech Emotion Recognition via Feature Distribution Adaptation Network | Code | 0
Cross-Lingual Speech Emotion Recognition: Humans vs. Self-Supervised Models | Code | 0
Filter-based multi-task cross-corpus feature learning for speech emotion recognition | Code | 0
Fine-grained Speech Sentiment Analysis in Chinese Psychological Support Hotlines Based on Large-scale Pre-trained Model | Code | 0
Fixed-MAML for Few Shot Classification in Multilingual Speech Emotion Recognition | Code | 0
Attentive Modality Hopping Mechanism for Speech Emotion Recognition | Code | 0
Multi-Teacher Language-Aware Knowledge Distillation for Multilingual Speech Emotion Recognition | Code | 0
Emotional Vietnamese Speech-Based Depression Diagnosis Using Dynamic Attention Mechanism | Code | 0
Attention Based Fully Convolutional Network for Speech Emotion Recognition | Code | 0
TBDM-Net: Bidirectional Dense Networks with Gender Information for Speech Emotion Recognition | Code | 0
Attention-Augmented End-to-End Multi-Task Learning for Emotion Prediction from Speech | Code | 0
Crossmodal ASR Error Correction with Discrete Speech Units | Code | 0
Speech Emotion Recognition with ASR Transcripts: A Comprehensive Study on Word Error Rate and Fusion Techniques | Code | 0
Efficient Arabic emotion recognition using deep neural networks | Code | 0
A Systematic Evaluation of Adversarial Attacks against Speech Emotion Recognition Models | Code | 0
An Interaction-aware Attention Network for Speech Emotion Recognition in Spoken Dialogs | Code | 0
Pretrained audio neural networks for Speech emotion recognition in Portuguese | Code | 0
CochCeps-Augment: A Novel Self-Supervised Contrastive Learning Using Cochlear Cepstrum-based Masking for Speech Emotion Recognition | Code | 0
Pre-trained Speech Processing Models Contain Human-Like Biases that Propagate to Speech Emotion Recognition | Code | 0
An Improved StarGAN for Emotional Voice Conversion: Enhancing Voice Quality and Data Augmentation | Code | 0
BSC-UPC at EmoSPeech-IberLEF2024: Attention Pooling for Emotion Recognition | Code | 0
Unsupervised Cross-Lingual Speech Emotion Recognition Using Pseudo Multilabel | Code | 0
Multimodal Speech Emotion Recognition and Ambiguity Resolution | Code | 0
Multimodal Speech Emotion Recognition Using Audio and Text | Code | 0
Dynamic Parameter Memory: Temporary LoRA-Enhanced LLM for Long-Sequence Emotion Recognition in Conversation | Code | 0
Improving Speech Emotion Recognition in Under-Resourced Languages via Speech-to-Speech Translation with Bootstrapping Data Selection | Code | 0
A Change of Heart: Improving Speech Emotion Recognition through Speech-to-Text Modality Conversion | Code | 0
Improving Speech Emotion Recognition Through Cross Modal Attention Alignment and Balanced Stacking Model | Code | 0
A speech corpus of Quechua Collao for automatic dimensional emotion recognition | Code | 0
An Extended Variational Mode Decomposition Algorithm Developed Speech Emotion Recognition Performance | Code | 0
Integrating Recurrence Dynamics for Speech Emotion Recognition | Code | 0
INTERSPEECH 2009 Emotion Challenge Revisited: Benchmarking 15 Years of Progress in Speech Emotion Recognition | Code | 0
Are you sure? Analysing Uncertainty Quantification Approaches for Real-world Speech Emotion Recognition | Code | 0
A Speech Representation Anonymization Framework via Selective Noise Perturbation | Code | 0
Analysis of Self-Supervised Learning and Dimensionality Reduction Methods in Clustering-Based Active Learning for Speech Emotion Recognition | Code | 0
Unveiling Hidden Factors: Explainable AI for Feature Boosting in Speech Emotion Recognition | Code | 0
Is Everything Fine, Grandma? Acoustic and Linguistic Modeling for Robust Elderly Speech Emotion Recognition | Code | 0
A multimodal dynamical variational autoencoder for audiovisual speech representation learning | Code | 0
nEMO: Dataset of Emotional Speech in Polish | Code | 0
Page 8 of 9

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Vertically long patch ViT | Accuracy | 94.07 | – | Unverified
2 | ConformerXL-P | Accuracy | 88.2 | – | Unverified
3 | CoordViT | Accuracy | 82.96 | – | Unverified
4 | SepTr + LeRaC | Accuracy | 70.95 | – | Unverified
5 | SepTr | Accuracy | 70.47 | – | Unverified
6 | ResNet-18 + SPEL | Accuracy | 68.12 | – | Unverified
7 | ViT | Accuracy | 67.81 | – | Unverified
8 | ResNet-18 + PyNADA | Accuracy | 65.15 | – | Unverified
9 | GRU | Accuracy | 55.01 | – | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | SER with MTL | UA CV | 0.78 | – | Unverified
2 | emoDARTS | UA CV | 0.77 | – | Unverified
3 | LSTM+FC | WA | 0.76 | – | Unverified
4 | TAP | WA CV | 0.74 | – | Unverified
5 | SYSCOMB: BLSTMATT with CSA (session5) | UA | 0.74 | – | Unverified
6 | Partially Fine-tuned HuBERT Large | WA CV | 0.73 | – | Unverified
7 | CNN - DARTS | UA | 0.7 | – | Unverified
8 | CNN+LSTM | UA | 0.65 | – | Unverified
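The UA and WA metrics reported above are the two standard SER accuracy variants: weighted accuracy (WA) is plain accuracy, so frequent classes dominate, while unweighted accuracy (UA, also called balanced accuracy or macro-average recall) averages per-class recall so every emotion class counts equally. A minimal NumPy sketch of both, with illustrative function names:

```python
import numpy as np

def weighted_accuracy(y_true, y_pred):
    """WA: fraction of correct predictions; majority classes dominate."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float(np.mean(y_true == y_pred))

def unweighted_accuracy(y_true, y_pred):
    """UA: mean of per-class recalls; every class weighs equally."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    recalls = [np.mean(y_pred[y_true == c] == c) for c in np.unique(y_true)]
    return float(np.mean(recalls))
```

On class-imbalanced corpora such as IEMOCAP, a model that favors the majority class can score a high WA but a much lower UA, which is why many of the entries above report both.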
# | Model | Metric | Claimed | Verified | Status
1 | VQ-MAE-S-12 (Frame) + Query2Emo | Accuracy | 84.1 | – | Unverified
2 | CNN-X (Shallow CNN) | Accuracy | 82.99 | – | Unverified
3 | xlsr-Wav2Vec2.0 (FineTuning) | Accuracy | 81.82 | – | Unverified
4 | CNN-14 (Fine-Tuning) | Accuracy | 76.58 | – | Unverified
5 | AlexNet (FineTuning) | Accuracy | 61.67 | – | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | wav2small-Teacher | CCC | 0.76 | – | Unverified
2 | wavlm | CCC | 0.75 | – | Unverified
3 | w2v2-L-robust-12 | CCC | 0.75 | – | Unverified
4 | preCPC | CCC | 0.71 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | wav2small-Teacher | CCC | 0.68 | – | Unverified
2 | wavlm | CCC | 0.67 | – | Unverified
3 | w2v2-L-robust-12 | CCC | 0.66 | – | Unverified
4 | preCPC | CCC | 0.64 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | wav2small-Teacher | CCC | 0.68 | – | Unverified
2 | wavlm | CCC | 0.65 | – | Unverified
3 | w2v2-L-robust-12 | CCC | 0.64 | – | Unverified
4 | preCPC | CCC | 0.38 | – | Unverified
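The CCC metric in the tables above is Lin's concordance correlation coefficient, the standard score for dimensional (continuous arousal/valence/dominance) emotion prediction: unlike Pearson correlation, it also penalizes shifts in mean and scale between predictions and labels. A short NumPy sketch of the standard formula:

```python
import numpy as np

def ccc(x, y):
    """Lin's concordance correlation coefficient between labels x and
    predictions y: 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2)."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    mx, my = x.mean(), y.mean()
    cov = np.mean((x - mx) * (y - my))
    return 2.0 * cov / (x.var() + y.var() + (mx - my) ** 2)
```

CCC equals 1 only for perfect agreement; a prediction that is perfectly correlated with the labels but offset or rescaled still scores below 1, which makes it stricter than Pearson correlation for regression-style SER.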
# | Model | Metric | Claimed | Verified | Status
1 | DAWN-hidden-SVM | Unweighted Accuracy (UA) | 32.1 | – | Unverified
2 | Wav2Small-VAD-SVM | Unweighted Accuracy (UA) | 23.3 | – | Unverified
3 | Speechbrain Wav2Vec2 | Unweighted Accuracy (UA) | 20.7 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | emotion2vec+base | Weighted Accuracy (WA) | 79.4 | – | Unverified
2 | emotion2vec+large | Weighted Accuracy (WA) | 69.5 | – | Unverified
3 | emotion2vec | Weighted Accuracy (WA) | 64.75 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Dusha baseline | Macro F1 | 0.77 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Dusha baseline | Macro F1 | 0.54 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | VGG-optiVMD | 1:1 Accuracy | 96.09 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | VQ-MAE-S-12 (Frame) + Query2Emo | Accuracy | 90.2 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | PyResNet | Unweighted Accuracy (UA) | 0.43 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | emoDARTS | UA | 0.66 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LSTM | CCC (Arousal) | 0.76 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CNN (1D) | Unweighted Accuracy | 65.2 | – | Unverified