SOTAVerified

Speech Emotion Recognition

Speech Emotion Recognition (SER) is a task in speech processing and computational paralinguistics that aims to recognize and categorize the emotions expressed in spoken language. The goal is to infer a speaker's emotional state, such as happiness, anger, sadness, or frustration, from speech characteristics like prosody, pitch, and rhythm.
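To make the feature side of this concrete, here is a minimal, purely illustrative sketch of two classic low-level cues that prosody-based SER systems build on: short-time energy (loudness) and zero-crossing rate (a rough proxy for pitch/noisiness). The function names and the synthetic signals are my own; real systems use far richer features (MFCCs, F0 contours) or learned representations such as wav2vec 2.0 embeddings.

```python
import math

def frame_features(signal, frame_len=160):
    """Split a waveform into fixed-length frames and compute, per frame:
    short-time energy and zero-crossing rate (ZCR)."""
    feats = []
    for start in range(0, len(signal) - frame_len + 1, frame_len):
        frame = signal[start:start + frame_len]
        energy = sum(s * s for s in frame) / frame_len
        zcr = sum(
            1 for a, b in zip(frame, frame[1:]) if (a < 0) != (b < 0)
        ) / (frame_len - 1)
        feats.append((energy, zcr))
    return feats

def tone(freq_hz, amp, n, sr=8000):
    """Synthetic sine tone standing in for a speech segment."""
    return [amp * math.sin(2 * math.pi * freq_hz * i / sr) for i in range(n)]

# A loud, high-frequency signal (crude stand-in for aroused speech)
# vs. a quiet, low-frequency one (crude stand-in for subdued speech).
excited = frame_features(tone(400, 0.9, 800))
calm = frame_features(tone(100, 0.2, 800))
```

On these synthetic inputs the "excited" frames show both higher energy and higher ZCR than the "calm" ones, which is the kind of separation a downstream classifier exploits.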

For multimodal emotion recognition, please upload your results to the Multimodal Emotion Recognition on IEMOCAP benchmark.

Papers

Showing 1–50 of 431 papers

Title | Status | Hype
CosyVoice 3: Towards In-the-wild Speech Generation via Scaling-up and Post-training | Code | 11
OSUM: Advancing Open Speech Understanding Models with Limited Resources in Academia | Code | 3
EmoBox: Multilingual Multi-corpus Speech Emotion Recognition Toolkit and Benchmark | Code | 3
emotion2vec: Self-Supervised Pre-Training for Speech Emotion Representation | Code | 3
Attention Is All You Need | Code | 3
EmoSphere-SER: Enhancing Speech Emotion Recognition Through Spherical Representation with Auxiliary Classification | Code | 2
BLSP-Emo: Towards Empathetic Large Speech-Language Models | Code | 2
EMO-SUPERB: An In-depth Look at Speech Emotion Recognition | Code | 2
LauraGPT: Listen, Attend, Understand, and Regenerate Audio with GPT | Code | 2
Dawn of the transformer era in speech emotion recognition: closing the valence gap | Code | 2
AST: Audio Spectrogram Transformer | Code | 2
Steering Language Model to Stable Speech Emotion Recognition via Contextual Perception and Chain of Thought | Code | 1
SigWavNet: Learning Multiresolution Signal Wavelet Network for Speech Emotion Recognition | Code | 1
SER Evals: In-domain and Out-of-domain Benchmarking for Speech Emotion Recognition | Code | 1
Odyssey 2024 - Speech Emotion Recognition Challenge: Dataset, Baseline Framework, and Results | Code | 1
Accuracy enhancement method for speech emotion recognition from spectrogram using temporal frequency correlation and positional information learning through knowledge transfer | Code | 1
emoDARTS: Joint Optimisation of CNN & Sequential Neural Network Architectures for Superior Speech Emotion Recognition | Code | 1
Speech Emotion Recognition Via CNN-Transformer and Multidimensional Attention Mechanism | Code | 1
Frame-level emotional state alignment method for speech emotion recognition | Code | 1
Do You Remember? Overcoming Catastrophic Forgetting for Fake Audio Detection | Code | 1
Emo-DNA: Emotion Decoupling and Alignment Learning for Cross-Corpus Speech Emotion Recognition | Code | 1
Vesper: A Compact and Effective Pretrained Model for Speech Emotion Recognition | Code | 1
Cross-Lingual Cross-Age Group Adaptation for Low-Resource Elderly Speech Emotion Recognition | Code | 1
Speech Emotion Diarization: Which Emotion Appears When? | Code | 1
Enhancing Speech Emotion Recognition Through Differentiable Architecture Search | Code | 1
A vector quantized masked autoencoder for speech emotion recognition | Code | 1
DWFormer: Dynamic Window transFormer for Speech Emotion Recognition | Code | 1
SpeechFormer++: A Hierarchical Efficient Framework for Paralinguistic Speech Processing | Code | 1
EmoGator: A New Open Source Vocal Burst Dataset with Baseline Machine Learning Classification Methodologies | Code | 1
Large Raw Emotional Dataset with Aggregation Mechanism | Code | 1
A Persian ASR-based SER: Modification of Sharif Emotional Speech Database and Investigation of Persian Text Corpora | Code | 1
Temporal Modeling Matters: A Novel Temporal Emotional Modeling Approach for Speech Emotion Recognition | Code | 1
SPEAKER VGG CCT: Cross-corpus Speech Emotion Recognition with Speaker Embedding and Vision Transformers | Code | 1
GM-TCNet: Gated Multi-scale Temporal Convolutional Network using Emotion Causality for Speech Emotion Recognition | Code | 1
Speech Emotion Recognition with Global-Aware Fusion on Multi-scale Feature Representation | Code | 1
MMER: Multimodal Multi-task Learning for Speech Emotion Recognition | Code | 1
Speech Emotion Recognition with Co-Attention based Multi-level Acoustic Information | Code | 1
SepTr: Separable Transformer for Audio Spectrogram Processing | Code | 1
Semi-FedSER: Semi-supervised Learning for Speech Emotion Recognition On Federated Learning using Multiview Pseudo-Labeling | Code | 1
Privacy-preserving Speech Emotion Recognition through Semi-Supervised Federated Learning | Code | 1
A proposal for Multimodal Emotion Recognition using aural transformers and Action Units on RAVDESS dataset | Code | 1
Attribute Inference Attack of Speech Emotion Recognition in Federated Learning Settings | Code | 1
Exploring Wav2vec 2.0 fine-tuning for improved speech emotion recognition | Code | 1
Arabic Speech Emotion Recognition Employing Wav2vec2.0 and HuBERT Based on BAVED Dataset | Code | 1
SERAB: A multi-lingual benchmark for speech emotion recognition | Code | 1
Light-SERNet: A lightweight fully convolutional neural network for speech emotion recognition | Code | 1
Speech Emotion Recognition with Multi-Task Learning | Code | 1
Efficient Speech Emotion Recognition Using Multi-Scale CNN and Attention | Code | 1
Emotion Recognition from Speech Using Wav2vec 2.0 Embeddings | Code | 1
EmoNet: A Transfer Learning Framework for Multi-Corpus Speech Emotion Recognition | Code | 1

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Vertically long patch ViT | Accuracy | 94.07 | – | Unverified
2 | ConformerXL-P | Accuracy | 88.2 | – | Unverified
3 | CoordViT | Accuracy | 82.96 | – | Unverified
4 | SepTr + LeRaC | Accuracy | 70.95 | – | Unverified
5 | SepTr | Accuracy | 70.47 | – | Unverified
6 | ResNet-18 + SPEL | Accuracy | 68.12 | – | Unverified
7 | ViT | Accuracy | 67.81 | – | Unverified
8 | ResNet-18 + PyNADA | Accuracy | 65.15 | – | Unverified
9 | GRU | Accuracy | 55.01 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SER with MTL | UA CV | 0.78 | – | Unverified
2 | emoDARTS | UA CV | 0.77 | – | Unverified
3 | LSTM+FC | WA | 0.76 | – | Unverified
4 | TAP | WA CV | 0.74 | – | Unverified
5 | SYSCOMB: BLSTMATT with CSA (session5) | UA | 0.74 | – | Unverified
6 | Partially Fine-tuned HuBERT Large | WA CV | 0.73 | – | Unverified
7 | CNN - DARTS | UA | 0.7 | – | Unverified
8 | CNN+LSTM | UA | 0.65 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | VQ-MAE-S-12 (Frame) + Query2Emo | Accuracy | 84.1 | – | Unverified
2 | CNN-X (Shallow CNN) | Accuracy | 82.99 | – | Unverified
3 | xlsr-Wav2Vec2.0 (FineTuning) | Accuracy | 81.82 | – | Unverified
4 | CNN-14 (Fine-Tuning) | Accuracy | 76.58 | – | Unverified
5 | AlexNet (FineTuning) | Accuracy | 61.67 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | wav2small-Teacher | CCC | 0.76 | – | Unverified
2 | wavlm | CCC | 0.75 | – | Unverified
3 | w2v2-L-robust-12 | CCC | 0.75 | – | Unverified
4 | preCPC | CCC | 0.71 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | wav2small-Teacher | CCC | 0.68 | – | Unverified
2 | wavlm | CCC | 0.67 | – | Unverified
3 | w2v2-L-robust-12 | CCC | 0.66 | – | Unverified
4 | preCPC | CCC | 0.64 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | wav2small-Teacher | CCC | 0.68 | – | Unverified
2 | wavlm | CCC | 0.65 | – | Unverified
3 | w2v2-L-robust-12 | CCC | 0.64 | – | Unverified
4 | preCPC | CCC | 0.38 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | DAWN-hidden-SVM | Unweighted Accuracy (UA) | 32.1 | – | Unverified
2 | Wav2Small-VAD-SVM | Unweighted Accuracy (UA) | 23.3 | – | Unverified
3 | Speechbrain Wav2Vec2 | Unweighted Accuracy (UA) | 20.7 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | emotion2vec+base | Weighted Accuracy (WA) | 79.4 | – | Unverified
2 | emotion2vec+large | Weighted Accuracy (WA) | 69.5 | – | Unverified
3 | emotion2vec | Weighted Accuracy (WA) | 64.75 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Dusha baseline | Macro F1 | 0.77 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Dusha baseline | Macro F1 | 0.54 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | VGG-optiVMD | 1:1 Accuracy | 96.09 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | VQ-MAE-S-12 (Frame) + Query2Emo | Accuracy | 90.2 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | PyResNet | Unweighted Accuracy (UA) | 0.43 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | emoDARTS | UA | 0.66 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LSTM | CCC (Arousal) | 0.76 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CNN (1D) | Unweighted Accuracy | 65.2 | – | Unverified
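The benchmark tables above mix three metric families: WA (weighted accuracy, i.e. overall sample-level accuracy), UA (unweighted accuracy, i.e. mean per-class recall, so each emotion counts equally regardless of class imbalance), and CCC (the concordance correlation coefficient, typically used for continuous arousal/valence labels). A minimal plain-Python sketch of these definitions, with function names of my own choosing:

```python
from statistics import mean

def weighted_accuracy(y_true, y_pred):
    """WA: fraction of all samples classified correctly."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def unweighted_accuracy(y_true, y_pred):
    """UA: average of per-class recalls (macro-averaged recall)."""
    recalls = []
    for c in set(y_true):
        idx = [i for i, t in enumerate(y_true) if t == c]
        recalls.append(sum(y_pred[i] == c for i in idx) / len(idx))
    return sum(recalls) / len(recalls)

def ccc(x, y):
    """Concordance correlation coefficient:
    2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2).
    Penalizes both poor correlation and bias between predictions
    and gold dimensional labels."""
    mx, my = mean(x), mean(y)
    vx = sum((a - mx) ** 2 for a in x) / len(x)
    vy = sum((b - my) ** 2 for b in y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / len(x)
    return 2 * cov / (vx + vy + (mx - my) ** 2)
```

For example, with gold labels `["ang", "ang", "ang", "hap"]` and predictions `["ang", "ang", "hap", "hap"]`, WA is 0.75 while UA is about 0.83, since the rare "hap" class is recalled perfectly and weighs as much as "ang".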