SOTAVerified

Speech Emotion Recognition

Speech Emotion Recognition (SER) is a task in speech processing and computational paralinguistics that aims to recognize and categorize the emotions expressed in spoken language. The goal is to determine a speaker's emotional state, such as happiness, anger, sadness, or frustration, from speech characteristics like prosody, pitch, and rhythm.
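As a minimal illustration of the idea, the sketch below extracts two simple prosodic cues from a waveform (RMS energy for loudness and zero-crossing rate as a rough pitch proxy) and maps them to an emotion label with a toy rule. The function names and thresholds are hypothetical; real SER systems learn this mapping from labeled corpora such as IEMOCAP.

```python
import math

def prosodic_features(samples, sample_rate):
    """Extract two toy prosodic cues from a mono waveform:
    RMS energy (loudness) and zero-crossing rate (a rough pitch proxy)."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    crossings = sum(1 for a, b in zip(samples, samples[1:]) if (a < 0) != (b < 0))
    zcr = crossings * sample_rate / (2 * len(samples))  # approximate frequency in Hz
    return rms, zcr

def toy_emotion_label(rms, zcr):
    """Illustrative rule-based mapping; actual systems learn this from data."""
    if rms > 0.5 and zcr > 250:
        return "anger"    # loud and high-pitched
    if rms < 0.2:
        return "sadness"  # quiet, low-energy speech
    return "neutral"

# Synthetic 300 Hz tone standing in for a loud, high-pitched utterance.
sr = 16000
wave = [0.8 * math.sin(2 * math.pi * 300 * t / sr) for t in range(sr)]
print(toy_emotion_label(*prosodic_features(wave, sr)))  # → anger
```

Modern approaches replace these hand-crafted cues with learned representations (e.g. wav2vec 2.0 or HuBERT embeddings, as many papers below do), but the pipeline shape (features, then a classifier) is the same.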

For multimodal emotion recognition, please upload your results to the Multimodal Emotion Recognition on IEMOCAP benchmark.

Papers

Showing 201–250 of 431 papers

| Title | Status | Hype |
| --- | --- | --- |
| Prompting Audios Using Acoustic Properties For Emotion Representation | | 0 |
| End-to-End Continuous Speech Emotion Recognition in Real-life Customer Service Call Center Conversations | | 0 |
| Active Learning Based Fine-Tuning Framework for Speech Emotion Recognition | | 0 |
| Unsupervised Representations Improve Supervised Learning in Speech Emotion Recognition | | 0 |
| The Broad Impact of Feature Imitation: Neural Enhancements Across Financial, Speech, and Physiological Domains | | 0 |
| Ensembling Multilingual Pre-Trained Models for Predicting Multi-Label Regression Emotion Share from Speech | | 0 |
| Leveraging Speech PTM, Text LLM, and Emotional TTS for Speech Emotion Recognition | | 0 |
| Speech Emotion Recognition with Distilled Prosodic and Linguistic Affect Representations | | 0 |
| LanSER: Language-Model Supported Speech Emotion Recognition | | 0 |
| Personalized Adaptation with Pre-trained Speech Encoders for Continuous Emotion Recognition | | 0 |
| MSM-VC: High-fidelity Source Style Transfer for Non-Parallel Voice Conversion by Multi-scale Style Modeling | | 0 |
| Noise robust speech emotion recognition with signal-to-noise ratio adapting speech enhancement | | 0 |
| Supervised Contrastive Learning with Nearest Neighbor Search for Speech Emotion Recognition | | 0 |
| Multiscale Contextual Learning for Speech Emotion Recognition in Emergency Call Center Conversations | | 0 |
| Decoding Emotions: A comprehensive Multilingual Study of Speech Models for Speech Emotion Recognition | Code | 0 |
| MSAC: Multiple Speech Attribute Control Method for Reliable Speech Emotion Recognition | | 0 |
| "We care": Improving Code Mixed Speech Emotion Recognition in Customer-Care Conversations | | 0 |
| Capturing Spectral and Long-term Contextual Information for Speech Emotion Recognition Using Deep Learning Techniques | | 0 |
| A Change of Heart: Improving Speech Emotion Recognition through Speech-to-Text Modality Conversion | Code | 0 |
| Cross-Corpus Multilingual Speech Emotion Recognition: Amharic vs. Other Languages | | 0 |
| Evaluating raw waveforms with deep learning frameworks for speech emotion recognition | | 0 |
| Empirical Interpretation of the Relationship Between Speech Acoustic Context and Emotion Recognition | | 0 |
| Cross-Language Speech Emotion Recognition Using Multimodal Dual Attention Transformers | | 0 |
| GEmo-CLAP: Gender-Attribute-Enhanced Contrastive Language-Audio Pretraining for Accurate Speech Emotion Recognition | | 0 |
| Exploring Attention Mechanisms for Multimodal Emotion Recognition in an Emergency Call Center Corpus | | 0 |
| MFSN: Multi-perspective Fusion Search Network For Pre-training Knowledge in Speech Emotion Recognition | | 0 |
| Learning Emotional Representations from Imbalanced Speech Data for Speech Emotion Recognition and Emotional Text-to-Speech | | 0 |
| Leveraging Semantic Information for Efficient Self-Supervised Emotion Recognition with Audio-Textual Distilled Models | | 0 |
| Transforming the Embeddings: A Lightweight Technique for Speech Emotion Recognition Tasks | | 0 |
| Transfer Learning for Personality Perception via Speech Emotion Recognition | | 0 |
| ASR and Emotional Speech: A Word-Level Investigation of the Mutual Impact of Speech and Emotion Recognition | | 0 |
| On the Efficacy and Noise-Robustness of Jointly Learned Speech Emotion and Automatic Speech Recognition | | 0 |
| Versatile audio-visual learning for emotion recognition | | 0 |
| Learning Robust Self-attention Features for Speech Emotion Recognition with Label-adaptive Mixup | Code | 0 |
| A vector quantized masked autoencoder for audiovisual speech emotion recognition | | 0 |
| A multimodal dynamical variational autoencoder for audiovisual speech representation learning | Code | 0 |
| A Comparative Study of Pre-trained Speech and Audio Embeddings for Speech Emotion Recognition | | 0 |
| An Empirical Study and Improvement for Speech Emotion Recognition | | 0 |
| Designing and Evaluating Speech Emotion Recognition Systems: A reality check case study with IEMOCAP | | 0 |
| CNN-n-GRU: end-to-end speech emotion recognition from raw waveform signal using CNNs and gated recurrent unit networks | | 0 |
| CoordViT: A Novel Method of Improve Vision Transformer-Based Speech Emotion Recognition using Coordinate Information Concatenate | | 0 |
| A low latency attention module for streaming self-supervised speech representation learning | Code | 0 |
| Gaussian-smoothed Imbalance Data Improves Speech Emotion Recognition | | 0 |
| Deep Implicit Distribution Alignment Networks for Cross-Corpus Speech Emotion Recognition | | 0 |
| Audio Representation Learning by Distilling Video as Privileged Information | | 0 |
| Deep learning of segment-level feature representation for speech emotion recognition in conversations | | 0 |
| Modulation spectral features for speech emotion recognition using deep neural networks | | 0 |
| LiteLSTM Architecture Based on Weights Sharing for Recurrent Neural Networks | | 0 |
| A speech corpus of Quechua Collao for automatic dimensional emotion recognition | Code | 0 |
| Leveraging Pre-Trained Acoustic Feature Extractor For Affective Vocal Bursts Tasks | Code | 0 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Vertically long patch ViT | Accuracy | 94.07 | | Unverified |
| 2 | ConformerXL-P | Accuracy | 88.2 | | Unverified |
| 3 | CoordViT | Accuracy | 82.96 | | Unverified |
| 4 | SepTr + LeRaC | Accuracy | 70.95 | | Unverified |
| 5 | SepTr | Accuracy | 70.47 | | Unverified |
| 6 | ResNet-18 + SPEL | Accuracy | 68.12 | | Unverified |
| 7 | ViT | Accuracy | 67.81 | | Unverified |
| 8 | ResNet-18 + PyNADA | Accuracy | 65.15 | | Unverified |
| 9 | GRU | Accuracy | 55.01 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | SER with MTL | UA CV | 0.78 | | Unverified |
| 2 | emoDARTS | UA CV | 0.77 | | Unverified |
| 3 | LSTM+FC | WA | 0.76 | | Unverified |
| 4 | TAP | WA CV | 0.74 | | Unverified |
| 5 | SYSCOMB: BLSTMATT with CSA (session5) | UA | 0.74 | | Unverified |
| 6 | Partially Fine-tuned HuBERT Large | WA CV | 0.73 | | Unverified |
| 7 | CNN - DARTS | UA | 0.7 | | Unverified |
| 8 | CNN+LSTM | UA | 0.65 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | VQ-MAE-S-12 (Frame) + Query2Emo | Accuracy | 84.1 | | Unverified |
| 2 | CNN-X (Shallow CNN) | Accuracy | 82.99 | | Unverified |
| 3 | xlsr-Wav2Vec2.0 (FineTuning) | Accuracy | 81.82 | | Unverified |
| 4 | CNN-14 (Fine-Tuning) | Accuracy | 76.58 | | Unverified |
| 5 | AlexNet (FineTuning) | Accuracy | 61.67 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | wav2small-Teacher | CCC | 0.76 | | Unverified |
| 2 | wavlm | CCC | 0.75 | | Unverified |
| 3 | w2v2-L-robust-12 | CCC | 0.75 | | Unverified |
| 4 | preCPC | CCC | 0.71 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | wav2small-Teacher | CCC | 0.68 | | Unverified |
| 2 | wavlm | CCC | 0.67 | | Unverified |
| 3 | w2v2-L-robust-12 | CCC | 0.66 | | Unverified |
| 4 | preCPC | CCC | 0.64 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | wav2small-Teacher | CCC | 0.68 | | Unverified |
| 2 | wavlm | CCC | 0.65 | | Unverified |
| 3 | w2v2-L-robust-12 | CCC | 0.64 | | Unverified |
| 4 | preCPC | CCC | 0.38 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | DAWN-hidden-SVM | Unweighted Accuracy (UA) | 32.1 | | Unverified |
| 2 | Wav2Small-VAD-SVM | Unweighted Accuracy (UA) | 23.3 | | Unverified |
| 3 | Speechbrain Wav2Vec2 | Unweighted Accuracy (UA) | 20.7 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | emotion2vec+base | Weighted Accuracy (WA) | 79.4 | | Unverified |
| 2 | emotion2vec+large | Weighted Accuracy (WA) | 69.5 | | Unverified |
| 3 | emotion2vec | Weighted Accuracy (WA) | 64.75 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Dusha baseline | Macro F1 | 0.77 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Dusha baseline | Macro F1 | 0.54 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | VGG-optiVMD | 1:1 Accuracy | 96.09 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | VQ-MAE-S-12 (Frame) + Query2Emo | Accuracy | 90.2 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | PyResNet | Unweighted Accuracy (UA) | 0.43 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | emoDARTS | UA | 0.66 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | LSTM | CCC (Arousal) | 0.76 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | CNN (1D) | Unweighted Accuracy | 65.2 | | Unverified |
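The benchmark tables above report three recurring metrics: WA (weighted accuracy, the overall fraction of correct predictions), UA (unweighted accuracy, the mean of per-class recalls, so rare emotions count equally), and CCC (concordance correlation coefficient, used for continuous arousal/valence predictions). A minimal sketch of how they are computed; function names here are illustrative, not from any particular toolkit:

```python
from collections import defaultdict
import statistics

def weighted_accuracy(y_true, y_pred):
    """WA: overall fraction of correct predictions (weighted by class frequency)."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def unweighted_accuracy(y_true, y_pred):
    """UA: per-class recall averaged over classes, so rare emotions count equally."""
    correct, total = defaultdict(int), defaultdict(int)
    for t, p in zip(y_true, y_pred):
        total[t] += 1
        correct[t] += (t == p)
    return sum(correct[c] / total[c] for c in total) / len(total)

def ccc(x, y):
    """Concordance correlation coefficient (Lin, 1989) for continuous labels."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    vx = sum((a - mx) ** 2 for a in x) / len(x)
    vy = sum((b - my) ** 2 for b in y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / len(x)
    return 2 * cov / (vx + vy + (mx - my) ** 2)

y_true = ["ang", "ang", "ang", "sad", "hap"]
y_pred = ["ang", "ang", "sad", "sad", "sad"]
print(weighted_accuracy(y_true, y_pred))    # 3/5 = 0.6
print(unweighted_accuracy(y_true, y_pred))  # mean of 2/3, 1/1, 0/1 ≈ 0.556
```

On class-imbalanced corpora such as IEMOCAP the two accuracy figures can diverge noticeably, which is why many of the tables report both.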