SOTAVerified

Speech Emotion Recognition

Speech Emotion Recognition (SER) is a task in speech processing and computational paralinguistics that aims to recognize and categorize the emotions expressed in spoken language. The goal is to determine a speaker's emotional state, such as happiness, anger, sadness, or frustration, from vocal cues such as prosody, pitch, and rhythm.
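The setup described above, mapping prosodic cues to a discrete emotion label, can be sketched as a toy nearest-centroid classifier. This is a minimal illustration only: the feature values below are made up, and real systems use learned acoustic representations rather than three hand-picked numbers.

```python
import numpy as np

# Toy prosodic feature vectors: [mean pitch (Hz), energy, speech rate (syll/s)].
# The values are illustrative and not drawn from any real corpus.
train_features = np.array([
    [280.0, 0.80, 5.5],   # happiness: raised pitch, high energy
    [250.0, 0.90, 6.0],   # anger: raised pitch, very high energy, fast
    [180.0, 0.30, 3.0],   # sadness: low pitch, low energy, slow
])
labels = ["happiness", "anger", "sadness"]

def classify(features: np.ndarray) -> str:
    """Assign the emotion whose prosodic centroid is nearest in Euclidean distance."""
    dists = np.linalg.norm(train_features - features, axis=1)
    return labels[int(np.argmin(dists))]

# A low-pitch, low-energy, slow utterance lands nearest the sadness centroid.
print(classify(np.array([185.0, 0.25, 2.8])))
```

In practice the hand-crafted vectors would be replaced by features extracted from audio (e.g. MFCCs or self-supervised embeddings, as in the wav2vec/HuBERT entries listed below) and the nearest-centroid rule by a trained classifier.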

For multimodal emotion recognition, please upload your results to the Multimodal Emotion Recognition on IEMOCAP benchmark.

Papers

Showing 401–431 of 431 papers

Title | Status | Hype
Iterative Feature Boosting for Explainable Speech Emotion Recognition | Code | 0
Domain Specific Wav2vec 2.0 Fine-tuning For The SE&R 2022 Challenge | Code | 0
EMOVOME: A Dataset for Emotion Recognition in Spontaneous Real-Life Speech | Code | 0
Knowledge Transfer For On-Device Speech Emotion Recognition with Neural Structured Learning | Code | 0
Label Uncertainty Modeling and Prediction for Speech Emotion Recognition using t-Distributions | Code | 0
A Dataset for Speech Emotion Recognition in Greek Theatrical Plays | Code | 0
Large Language Models Meet Contrastive Learning: Zero-Shot Emotion Recognition Across Languages | Code | 0
Deep Learning based Emotion Recognition System Using Speech Features and Transcriptions | Code | 0
DeepEMO: Deep Learning for Speech Emotion Recognition | Code | 0
The Emotional Voices Database: Towards Controlling the Emotion Dimension in Voice Generation Systems | Code | 0
Learning Alignment for Multimodal Emotion Recognition from Speech | Code | 0
Speech Emotion Recognition Using Multi-hop Attention Mechanism | Code | 0
Non-linear Neurons with Human-like Apical Dendrite Activations | Code | 0
Active Learning with Task Adaptation Pre-training for Speech Emotion Recognition | Code | 0
BERSting at the Screams: A Benchmark for Distanced, Emotional and Shouted Speech Recognition | Code | 0
Learning Robust Self-attention Features for Speech Emotion Recognition with Label-adaptive Mixup | Code | 0
Cross Lingual Speech Emotion Recognition: Urdu vs. Western Languages | Code | 0
Learning Speech Emotion Representations in the Quaternion Domain | Code | 0
ABHINAYA -- A System for Speech Emotion Recognition In Naturalistic Conditions Challenge | Code | 0
Speech Emotion Recognition Using Speech Feature and Word Embedding | Code | 0
Learning Rate Curriculum | Code | 0
Leveraged Mel spectrograms using Harmonic and Percussive Components in Speech Emotion Recognition | Code | 0
Leveraging Content and Acoustic Representations for Speech Emotion Recognition | Code | 0
On The Differences Between Song and Speech Emotion Recognition: Effect of Feature Sets, Feature Types, and Classifiers | Code | 0
Leveraging Pre-Trained Acoustic Feature Extractor For Affective Vocal Bursts Tasks | Code | 0
The Whole Is Bigger Than the Sum of Its Parts: Modeling Individual Annotators to Capture Emotional Variability | Code | 0
Self-supervised Graphs for Audio Representation Learning with Limited Labeled Data | Code | 0
Decoding Emotions: A comprehensive Multilingual Study of Speech Models for Speech Emotion Recognition | Code | 0
Audio Explanation Synthesis with Generative Foundation Models | Code | 0
A low latency attention module for streaming self-supervised speech representation learning | Code | 0
CTL-MTNet: A Novel CapsNet and Transfer Learning-Based Mixed Task Net for the Single-Corpus and Cross-Corpus Speech Emotion Recognition | Code | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Vertically long patch ViT | Accuracy | 94.07 | – | Unverified
2 | ConformerXL-P | Accuracy | 88.2 | – | Unverified
3 | CoordViT | Accuracy | 82.96 | – | Unverified
4 | SepTr + LeRaC | Accuracy | 70.95 | – | Unverified
5 | SepTr | Accuracy | 70.47 | – | Unverified
6 | ResNet-18 + SPEL | Accuracy | 68.12 | – | Unverified
7 | ViT | Accuracy | 67.81 | – | Unverified
8 | ResNet-18 + PyNADA | Accuracy | 65.15 | – | Unverified
9 | GRU | Accuracy | 55.01 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SER with MTL | UA CV | 0.78 | – | Unverified
2 | emoDARTS | UA CV | 0.77 | – | Unverified
3 | LSTM+FC | WA | 0.76 | – | Unverified
4 | TAP | WA CV | 0.74 | – | Unverified
5 | SYSCOMB: BLSTMATT with CSA (session5) | UA | 0.74 | – | Unverified
6 | Partially Fine-tuned HuBERT Large | WA CV | 0.73 | – | Unverified
7 | CNN - DARTS | UA | 0.7 | – | Unverified
8 | CNN+LSTM | UA | 0.65 | – | Unverified

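Several tables here mix WA and UA columns. As commonly defined in the SER literature, WA (weighted accuracy) is overall accuracy, while UA (unweighted accuracy) is the mean of per-class recalls, which is more informative on class-imbalanced corpora. A minimal sketch of both metrics, assuming these standard definitions:

```python
import numpy as np

def wa_ua(y_true: np.ndarray, y_pred: np.ndarray) -> tuple[float, float]:
    """WA = overall accuracy; UA = mean of per-class recalls (balanced accuracy)."""
    wa = float(np.mean(y_true == y_pred))
    recalls = [np.mean(y_pred[y_true == c] == c) for c in np.unique(y_true)]
    ua = float(np.mean(recalls))
    return wa, ua

# Imbalanced toy labels: class 0 dominates, class 1 is rare and half-missed.
y_true = np.array([0, 0, 0, 0, 0, 0, 1, 1])
y_pred = np.array([0, 0, 0, 0, 0, 0, 0, 1])
wa, ua = wa_ua(y_true, y_pred)
print(wa, ua)  # WA = 0.875 rewards the majority class; UA = 0.75 penalizes the missed rare class
```

The gap between the two values on imbalanced data is why SER leaderboards often report both.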
# | Model | Metric | Claimed | Verified | Status
1 | VQ-MAE-S-12 (Frame) + Query2Emo | Accuracy | 84.1 | – | Unverified
2 | CNN-X (Shallow CNN) | Accuracy | 82.99 | – | Unverified
3 | xlsr-Wav2Vec2.0 (FineTuning) | Accuracy | 81.82 | – | Unverified
4 | CNN-14 (Fine-Tuning) | Accuracy | 76.58 | – | Unverified
5 | AlexNet (FineTuning) | Accuracy | 61.67 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | wav2small-Teacher | CCC | 0.76 | – | Unverified
2 | wavlm | CCC | 0.75 | – | Unverified
3 | w2v2-L-robust-12 | CCC | 0.75 | – | Unverified
4 | preCPC | CCC | 0.71 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | wav2small-Teacher | CCC | 0.68 | – | Unverified
2 | wavlm | CCC | 0.67 | – | Unverified
3 | w2v2-L-robust-12 | CCC | 0.66 | – | Unverified
4 | preCPC | CCC | 0.64 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | wav2small-Teacher | CCC | 0.68 | – | Unverified
2 | wavlm | CCC | 0.65 | – | Unverified
3 | w2v2-L-robust-12 | CCC | 0.64 | – | Unverified
4 | preCPC | CCC | 0.38 | – | Unverified

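The CCC entries above use the Concordance Correlation Coefficient, the standard metric for dimensional emotion prediction (continuous arousal/valence scores rather than discrete classes). Unlike Pearson correlation, CCC also penalizes scale and offset mismatch between predictions and ratings. A minimal sketch using Lin's formula:

```python
import numpy as np

def ccc(x: np.ndarray, y: np.ndarray) -> float:
    """Concordance Correlation Coefficient (Lin, 1989):
    2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2)."""
    mx, my = x.mean(), y.mean()
    cov = np.mean((x - mx) * (y - my))  # population covariance
    return float(2 * cov / (x.var() + y.var() + (mx - my) ** 2))

# Perfect agreement gives CCC = 1; a constant offset lowers CCC even
# though Pearson correlation would remain 1.
truth = np.array([0.1, 0.4, 0.5, 0.8])
assert ccc(truth, truth) == 1.0
print(ccc(truth, truth + 0.3))  # below 1 despite perfect linear correlation
```

This offset sensitivity is why CCC is preferred over plain correlation when models must reproduce annotators' rating scale, not just its trend.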
# | Model | Metric | Claimed | Verified | Status
1 | DAWN-hidden-SVM | Unweighted Accuracy (UA) | 32.1 | – | Unverified
2 | Wav2Small-VAD-SVM | Unweighted Accuracy (UA) | 23.3 | – | Unverified
3 | Speechbrain Wav2Vec2 | Unweighted Accuracy (UA) | 20.7 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | emotion2vec+base | Weighted Accuracy (WA) | 79.4 | – | Unverified
2 | emotion2vec+large | Weighted Accuracy (WA) | 69.5 | – | Unverified
3 | emotion2vec | Weighted Accuracy (WA) | 64.75 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Dusha baseline | Macro F1 | 0.77 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Dusha baseline | Macro F1 | 0.54 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | VGG-optiVMD | 1:1 Accuracy | 96.09 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | VQ-MAE-S-12 (Frame) + Query2Emo | Accuracy | 90.2 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | PyResNet | Unweighted Accuracy (UA) | 0.43 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | emoDARTS | UA | 0.66 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LSTM | CCC (Arousal) | 0.76 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CNN (1D) | Unweighted Accuracy | 65.2 | – | Unverified