SOTAVerified

Speech Emotion Recognition

Speech Emotion Recognition is a speech-processing and computational-paralinguistics task that aims to recognize and categorize the emotions expressed in spoken language. The goal is to determine a speaker's emotional state, such as happiness, anger, sadness, or frustration, from cues in their speech, including prosody, pitch, and rhythm.
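As a rough illustration of the kind of acoustic cues mentioned above, the following sketch extracts toy frame-level features (log energy as a loudness proxy, zero-crossing rate as a voicing/pitch proxy) and pools them to an utterance-level vector, a common front-end pattern in SER pipelines. The signal is synthetic and the feature choices are illustrative, not a specific published system.

```python
import numpy as np

def frame_signal(x, frame_len=400, hop=160):
    """Split a 1-D signal into overlapping frames (25 ms / 10 ms at 16 kHz)."""
    n = 1 + max(0, (len(x) - frame_len) // hop)
    return np.stack([x[i * hop : i * hop + frame_len] for i in range(n)])

def prosodic_features(x):
    """Toy frame-level SER features: log energy and zero-crossing rate,
    pooled to utterance level with mean/std statistics."""
    frames = frame_signal(x)
    energy = np.log(np.sum(frames ** 2, axis=1) + 1e-10)
    zcr = np.mean(np.abs(np.diff(np.sign(frames), axis=1)) > 0, axis=1)
    return np.array([energy.mean(), energy.std(), zcr.mean(), zcr.std()])

# Synthetic 1-second "utterance": a 220 Hz tone with a rising energy contour
sr = 16000
t = np.arange(sr) / sr
x = np.sin(2 * np.pi * 220 * t) * np.linspace(0.2, 1.0, sr)
feats = prosodic_features(x)
print(feats.shape)  # (4,)
```

In practice, such hand-crafted statistics would feed a classifier, or be replaced entirely by learned representations (e.g. wav2vec 2.0 or HuBERT embeddings, as several leaderboard entries below use).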

For multimodal emotion recognition, please upload your results to the Multimodal Emotion Recognition on IEMOCAP benchmark.

Papers

Showing 351–400 of 431 papers

Title | Status | Hype
CoordViT: A Novel Method of Improve Vision Transformer-Based Speech Emotion Recognition using Coordinate Information Concatenate | | 0
CopyPaste: An Augmentation Method for Speech Emotion Recognition | | 0
CO-VADA: A Confidence-Oriented Voice Augmentation Debiasing Approach for Fair Speech Emotion Recognition | | 0
Cross-Corpus Multilingual Speech Emotion Recognition: Amharic vs. Other Languages | | 0
Cross-Language Speech Emotion Recognition Using Multimodal Dual Attention Transformers | | 0
Cross-lingual and Multilingual Speech Emotion Recognition on English and French | | 0
Cross Lingual Cross Corpus Speech Emotion Recognition | | 0
CTA-RNN: Channel and Temporal-wise Attention RNN Leveraging Pre-trained ASR Embeddings for Speech Emotion Recognition | | 0
Curriculum Learning for Speech Emotion Recognition from Crowdsourced Labels | | 0
Improving Speech Emotion Recognition with Unsupervised Speaking Style Transfer | | 0
Deep Implicit Distribution Alignment Networks for Cross-Corpus Speech Emotion Recognition | | 0
Deep Learning for Speech Emotion Recognition: A CNN Approach Utilizing Mel Spectrograms | | 0
Deep Learning of Segment-Level Feature Representation for Speech Emotion Recognition in Conversations | | 0
Deep Residual Local Feature Learning for Speech Emotion Recognition | | 0
Deep scattering network for speech emotion recognition | | 0
Describe Where You Are: Improving Noise-Robustness for Speech Emotion Recognition with Text Description of the Environment | | 0
Describing emotions with acoustic property prompts for speech emotion recognition | | 0
Designing and Evaluating Speech Emotion Recognition Systems: A reality check case study with IEMOCAP | | 0
Developing a High-performance Framework for Speech Emotion Recognition in Naturalistic Conditions Challenge for Emotional Attribute Prediction | | 0
Disentangling Prosody Representations with Unsupervised Speech Reconstruction | | 0
Domain Adapting Deep Reinforcement Learning for Real-world Speech Emotion Recognition | | 0
Domain Adversarial for Acoustic Emotion Recognition | | 0
Double Multi-Head Attention Multimodal System for Odyssey 2024 Speech Emotion Recognition Challenge | | 0
DSNet: Disentangled Siamese Network with Neutral Calibration for Speech Emotion Recognition | | 0
Dynamic Layer Customization for Noise Robust Speech Emotion Recognition in Heterogeneous Condition Training | | 0
Effects of Label in Neural Speech Emotion Recognition System [In Chinese] | | 0
ED-TTS: Multi-Scale Emotion Modeling using Cross-Domain Emotion Diarization for Emotional Speech Synthesis | | 0
Effect of different splitting criteria on the performance of speech emotion recognition | | 0
Emo-bias: A Large Scale Evaluation of Social Bias on Speech Emotion Recognition | | 0
EMO-Codec: An In-Depth Look at Emotion Preservation capacity of Legacy and Neural Codec Models With Subjective and Objective Evaluations | | 0
EmoDiarize: Speaker Diarization and Emotion Identification from Speech Signals using Convolutional Neural Networks | | 0
EmoFormer: A Text-Independent Speech Emotion Recognition using a Hybrid Transformer-CNN model | | 0
EmoTech: A Multi-modal Speech Emotion Recognition Using Multi-source Low-level Information with Hybrid Recurrent Network | | 0
Emotion controllable speech synthesis using emotion-unlabeled dataset with the assistance of cross-domain speech emotion recognition | | 0
EmotionNAS: Two-stream Neural Architecture Search for Speech Emotion Recognition | | 0
Emotion Recognition In Persian Speech Using Deep Neural Networks | | 0
Emotion Recognition in Speech using Cross-Modal Transfer in the Wild | | 0
EMOVO Corpus: an Italian Emotional Speech Database | | 0
Empirical Analysis of Asynchronous Federated Learning on Heterogeneous Devices: Efficiency, Fairness, and Privacy Trade-offs | | 0
Empirical Interpretation of Speech Emotion Perception with Attention Based Model for Speech Emotion Recognition | | 0
Empirical Interpretation of the Relationship Between Speech Acoustic Context and Emotion Recognition | | 0
End-to-End Continuous Speech Emotion Recognition in Real-life Customer Service Call Center Conversations | | 0
End-to-End Speech Emotion Recognition: Challenges of Real-Life Emergency Call Centers Data Recordings | | 0
End-to-end transfer learning for speaker-independent cross-language and cross-corpus speech emotion recognition | | 0
Enhanced Speech Emotion Recognition with Efficient Channel Attention Guided Deep CNN-BiLSTM Framework | | 0
Enhancing Segment-Based Speech Emotion Recognition by Deep Self-Learning | | 0
Enhancing Speech Emotion Recognition through Segmental Average Pooling of Self-Supervised Learning Features | | 0
Enhancing Speech Emotion Recognition with Graph-Based Multimodal Fusion and Prosodic Features for the Speech Emotion Recognition in Naturalistic Conditions Challenge at Interspeech 2025 | | 0
Ensembling Multilingual Pre-Trained Models for Predicting Multi-Label Regression Emotion Share from Speech | | 0
Evaluating raw waveforms with deep learning frameworks for speech emotion recognition | | 0
Page 8 of 9

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Vertically long patch ViT | Accuracy | 94.07 | | Unverified
2 | ConformerXL-P | Accuracy | 88.2 | | Unverified
3 | CoordViT | Accuracy | 82.96 | | Unverified
4 | SepTr + LeRaC | Accuracy | 70.95 | | Unverified
5 | SepTr | Accuracy | 70.47 | | Unverified
6 | ResNet-18 + SPEL | Accuracy | 68.12 | | Unverified
7 | ViT | Accuracy | 67.81 | | Unverified
8 | ResNet-18 + PyNADA | Accuracy | 65.15 | | Unverified
9 | GRU | Accuracy | 55.01 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SER with MTL | UA CV | 0.78 | | Unverified
2 | emoDARTS | UA CV | 0.77 | | Unverified
3 | LSTM+FC | WA | 0.76 | | Unverified
4 | TAP | WA CV | 0.74 | | Unverified
5 | SYSCOMB: BLSTMATT with CSA (session5) | UA | 0.74 | | Unverified
6 | Partially Fine-tuned HuBERT Large | WA CV | 0.73 | | Unverified
7 | CNN - DARTS | UA | 0.7 | | Unverified
8 | CNN+LSTM | UA | 0.65 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | VQ-MAE-S-12 (Frame) + Query2Emo | Accuracy | 84.1 | | Unverified
2 | CNN-X (Shallow CNN) | Accuracy | 82.99 | | Unverified
3 | xlsr-Wav2Vec2.0 (FineTuning) | Accuracy | 81.82 | | Unverified
4 | CNN-14 (Fine-Tuning) | Accuracy | 76.58 | | Unverified
5 | AlexNet (FineTuning) | Accuracy | 61.67 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | wav2small-Teacher | CCC | 0.76 | | Unverified
2 | wavlm | CCC | 0.75 | | Unverified
3 | w2v2-L-robust-12 | CCC | 0.75 | | Unverified
4 | preCPC | CCC | 0.71 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | wav2small-Teacher | CCC | 0.68 | | Unverified
2 | wavlm | CCC | 0.67 | | Unverified
3 | w2v2-L-robust-12 | CCC | 0.66 | | Unverified
4 | preCPC | CCC | 0.64 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | wav2small-Teacher | CCC | 0.68 | | Unverified
2 | wavlm | CCC | 0.65 | | Unverified
3 | w2v2-L-robust-12 | CCC | 0.64 | | Unverified
4 | preCPC | CCC | 0.38 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | DAWN-hidden-SVM | Unweighted Accuracy (UA) | 32.1 | | Unverified
2 | Wav2Small-VAD-SVM | Unweighted Accuracy (UA) | 23.3 | | Unverified
3 | Speechbrain Wav2Vec2 | Unweighted Accuracy (UA) | 20.7 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | emotion2vec+base | Weighted Accuracy (WA) | 79.4 | | Unverified
2 | emotion2vec+large | Weighted Accuracy (WA) | 69.5 | | Unverified
3 | emotion2vec | Weighted Accuracy (WA) | 64.75 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Dusha baseline | Macro F1 | 0.77 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Dusha baseline | Macro F1 | 0.54 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | VGG-optiVMD | 1:1 Accuracy | 96.09 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | VQ-MAE-S-12 (Frame) + Query2Emo | Accuracy | 90.2 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | PyResNet | Unweighted Accuracy (UA) | 0.43 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | emoDARTS | UA | 0.66 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LSTM | CCC (Arousal) | 0.76 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CNN (1D) | Unweighted Accuracy | 65.2 | | Unverified
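The leaderboards above report weighted accuracy (WA, plain accuracy), unweighted accuracy (UA, mean per-class recall, which is robust to class imbalance), and the concordance correlation coefficient (CCC, used for continuous arousal/valence prediction). As a reference, here is a minimal sketch of these standard metrics; the toy labels are illustrative only.

```python
import numpy as np

def weighted_accuracy(y_true, y_pred):
    """WA: fraction of correct predictions (plain accuracy)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float(np.mean(y_true == y_pred))

def unweighted_accuracy(y_true, y_pred):
    """UA: mean of per-class recalls (balanced accuracy)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    recalls = [np.mean(y_pred[y_true == c] == c) for c in np.unique(y_true)]
    return float(np.mean(recalls))

def ccc(x, y):
    """Concordance correlation coefficient for continuous targets."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    cov = np.mean((x - x.mean()) * (y - y.mean()))
    return float(2 * cov / (x.var() + y.var() + (x.mean() - y.mean()) ** 2))

# Imbalanced 2-class example where WA and UA disagree:
y_true = [0, 0, 0, 0, 1]
y_pred = [0, 0, 0, 0, 0]
print(weighted_accuracy(y_true, y_pred))    # 0.8
print(unweighted_accuracy(y_true, y_pred))  # 0.5
```

The gap between WA and UA in the example is why imbalanced SER corpora such as IEMOCAP are usually reported with both metrics.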