SOTAVerified

Speech Emotion Recognition

Speech Emotion Recognition (SER) is a task in speech processing and computational paralinguistics that aims to recognize and categorize the emotions expressed in spoken language. The goal is to infer a speaker's emotional state, such as happiness, anger, sadness, or frustration, from characteristics of their speech, such as prosody, pitch, and rhythm.
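As a toy illustration of the classic feature-based approach described above, the sketch below summarizes an utterance with two crude prosodic cues (frame energy and zero-crossing rate) and classifies it with a nearest-centroid rule. This is a hypothetical minimal example on synthetic signals, not any system from the papers listed below; real SER systems use far richer features such as MFCCs or self-supervised embeddings.

```python
import numpy as np

def prosodic_features(signal, frame=160):
    # Crude stand-ins for prosodic cues: per-frame energy (loudness)
    # and zero-crossing rate (a rough voicing/pitch proxy).
    frames = signal[: len(signal) // frame * frame].reshape(-1, frame)
    energy = np.mean(frames ** 2, axis=1)
    zcr = np.mean(np.abs(np.diff(np.sign(frames), axis=1)) > 0, axis=1)
    # Summarize the utterance by the mean and variability of each cue.
    return np.array([energy.mean(), energy.std(), zcr.mean(), zcr.std()])

class NearestCentroidSER:
    # Minimal classifier: one centroid per emotion in feature space.
    def fit(self, X, y):
        self.labels = sorted(set(y))
        self.centroids = {c: np.mean([x for x, t in zip(X, y) if t == c], axis=0)
                          for c in self.labels}
        return self

    def predict(self, x):
        # Assign the emotion whose centroid is closest in feature space.
        return min(self.labels,
                   key=lambda c: np.linalg.norm(x - self.centroids[c]))

# Hypothetical training data: a loud, noisy "angry" clip vs a quiet "sad" clip.
rng = np.random.default_rng(0)
train_X = [prosodic_features(rng.normal(0, 1.0, 16000)),
           prosodic_features(rng.normal(0, 0.1, 16000))]
clf = NearestCentroidSER().fit(train_X, ["angry", "sad"])
print(clf.predict(prosodic_features(rng.normal(0, 0.9, 16000))))  # prints "angry"
```

Here the two synthetic classes are separable by energy alone; the point is only to show the feature-extract-then-classify structure that much of the literature below builds on.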

For multimodal emotion recognition, please upload your results to the Multimodal Emotion Recognition on IEMOCAP benchmark.

Papers

Showing 251–300 of 431 papers

| Title | Status | Hype |
|-------|--------|------|
| Visually Guided Self Supervised Learning of Speech Representations | | 0 |
| WavFusion: Towards wav2vec 2.0 Multimodal Speech Emotion Recognition | | 0 |
| "We care": Improving Code Mixed Speech Emotion Recognition in Customer-Care Conversations | | 0 |
| Ensembling Multilingual Pre-Trained Models for Predicting Multi-Label Regression Emotion Share from Speech | | 0 |
| Evaluating raw waveforms with deep learning frameworks for speech emotion recognition | | 0 |
| Exploring Attention Mechanisms for Multimodal Emotion Recognition in an Emergency Call Center Corpus | | 0 |
| Exploring Self-Supervised Multi-view Contrastive Learning for Speech Emotion Recognition with Limited Annotations | | 0 |
| Expressive Voice Conversion: A Joint Framework for Speaker Identity and Emotional Style Transfer | | 0 |
| Feature Selection Enhancement and Feature Space Visualization for Speech-Based Emotion Recognition | | 0 |
| Focal Loss based Residual Convolutional Neural Network for Speech Emotion Recognition | | 0 |
| Forewords | | 0 |
| FSER: Deep Convolutional Neural Networks for Speech Emotion Recognition | | 0 |
| Fusing ASR Outputs in Joint Training for Speech Emotion Recognition | | 0 |
| Gaussian-smoothed Imbalance Data Improves Speech Emotion Recognition | | 0 |
| GEmo-CLAP: Gender-Attribute-Enhanced Contrastive Language-Audio Pretraining for Accurate Speech Emotion Recognition | | 0 |
| GMP-TL: Gender-augmented Multi-scale Pseudo-label Enhanced Transfer Learning for Speech Emotion Recognition | | 0 |
| Heterogeneous bimodal attention fusion for speech emotion recognition | | 0 |
| Are Paralinguistic Representations all that is needed for Speech Emotion Recognition? | | 0 |
| Hybrid Data Augmentation and Deep Attention-based Dilated Convolutional-Recurrent Neural Networks for Speech Emotion Recognition | | 0 |
| HYFuse: Aligning Heterogeneous Speech Pre-Trained Representations in Hyperbolic Space for Speech Emotion Recognition | | 0 |
| "I have vxxx bxx connexxxn!": Facing Packet Loss in Deep Speech Emotion Recognition | | 0 |
| Improved Frame Level Features and SVM Supervectors Approach for the Recogniton of Emotional States from Speech: Application to categorical and dimensional states | | 0 |
| Improved Speech Emotion Recognition using Transfer Learning and Spectrogram Augmentation | | 0 |
| Improvement and Implementation of a Speech Emotion Recognition Model Based on Dual-Layer LSTM | | 0 |
| Improving Speaker-independent Speech Emotion Recognition Using Dynamic Joint Distribution Adaptation | | 0 |
| Improving Speech-based Emotion Recognition with Contextual Utterance Analysis and LLMs | | 0 |
| Improving Speech Emotion Recognition Through Focus and Calibration Attention Mechanisms | | 0 |
| Improving speech emotion recognition via Transformer-based Predictive Coding through transfer learning | | 0 |
| Integrating Contrastive Learning into a Multitask Transformer Model for Effective Domain Adaptation | | 0 |
| Investigating Effective Speaker Property Privacy Protection in Federated Learning for Speech Emotion Recognition | | 0 |
| Investigating salient representations and label Variance in Dimensional Speech Emotion Analysis | | 0 |
| Investigating the Impact of Word Informativeness on Speech Emotion Recognition | | 0 |
| Investigations on Audiovisual Emotion Recognition in Noisy Conditions | | 0 |
| Is It Still Fair? Investigating Gender Fairness in Cross-Corpus Speech Emotion Recognition | | 0 |
| A Case Study on the Independence of Speech Emotion Recognition in Bangla and English Languages using Language-Independent Prosodic Features | | 0 |
| Fine-grained Early Frequency Attention for Deep Speaker Representation Learning | | 0 |
| LanSER: Language-Model Supported Speech Emotion Recognition | | 0 |
| Layer-Wise Analysis of Self-Supervised Acoustic Word Embeddings: A Study on Speech Emotion Recognition | | 0 |
| learning discriminative features from spectrograms using center loss for speech emotion recognition | | 0 |
| Learning Discriminative features using Center Loss and Reconstruction as Regularizer for Speech Emotion Recognition | | 0 |
| Learning Emotional Representations from Imbalanced Speech Data for Speech Emotion Recognition and Emotional Text-to-Speech | | 0 |
| Learning More with Less: Self-Supervised Approaches for Low-Resource Speech Emotion Recognition | | 0 |
| Learning spectro-temporal features with 3D CNNs for speech emotion recognition | | 0 |
| Learning Spontaneity to Improve Emotion Recognition In Speech | | 0 |
| Learning Transferable Features for Speech Emotion Recognition | | 0 |
| Leveraging Cross-Attention Transformer and Multi-Feature Fusion for Cross-Linguistic Speech Emotion Recognition | | 0 |
| Leveraging Semantic Information for Efficient Self-Supervised Emotion Recognition with Audio-Textual Distilled Models | | 0 |
| Leveraging Speech PTM, Text LLM, and Emotional TTS for Speech Emotion Recognition | | 0 |
| LiteLSTM Architecture Based on Weights Sharing for Recurrent Neural Networks | | 0 |
| MATER: Multi-level Acoustic and Textual Emotion Representation for Interpretable Speech Emotion Recognition | | 0 |
Page 6 of 9

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | Vertically long patch ViT | Accuracy | 94.07 | | Unverified |
| 2 | ConformerXL-P | Accuracy | 88.2 | | Unverified |
| 3 | CoordViT | Accuracy | 82.96 | | Unverified |
| 4 | SepTr + LeRaC | Accuracy | 70.95 | | Unverified |
| 5 | SepTr | Accuracy | 70.47 | | Unverified |
| 6 | ResNet-18 + SPEL | Accuracy | 68.12 | | Unverified |
| 7 | ViT | Accuracy | 67.81 | | Unverified |
| 8 | ResNet-18 + PyNADA | Accuracy | 65.15 | | Unverified |
| 9 | GRU | Accuracy | 55.01 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | SER with MTL | UA CV | 0.78 | | Unverified |
| 2 | emoDARTS | UA CV | 0.77 | | Unverified |
| 3 | LSTM+FC | WA | 0.76 | | Unverified |
| 4 | TAP | WA CV | 0.74 | | Unverified |
| 5 | SYSCOMB: BLSTMATT with CSA (session5) | UA | 0.74 | | Unverified |
| 6 | Partially Fine-tuned HuBERT Large | WA CV | 0.73 | | Unverified |
| 7 | CNN - DARTS | UA | 0.7 | | Unverified |
| 8 | CNN+LSTM | UA | 0.65 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | VQ-MAE-S-12 (Frame) + Query2Emo | Accuracy | 84.1 | | Unverified |
| 2 | CNN-X (Shallow CNN) | Accuracy | 82.99 | | Unverified |
| 3 | xlsr-Wav2Vec2.0 (FineTuning) | Accuracy | 81.82 | | Unverified |
| 4 | CNN-14 (Fine-Tuning) | Accuracy | 76.58 | | Unverified |
| 5 | AlexNet (FineTuning) | Accuracy | 61.67 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | wav2small-Teacher | CCC | 0.76 | | Unverified |
| 2 | wavlm | CCC | 0.75 | | Unverified |
| 3 | w2v2-L-robust-12 | CCC | 0.75 | | Unverified |
| 4 | preCPC | CCC | 0.71 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | wav2small-Teacher | CCC | 0.68 | | Unverified |
| 2 | wavlm | CCC | 0.67 | | Unverified |
| 3 | w2v2-L-robust-12 | CCC | 0.66 | | Unverified |
| 4 | preCPC | CCC | 0.64 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | wav2small-Teacher | CCC | 0.68 | | Unverified |
| 2 | wavlm | CCC | 0.65 | | Unverified |
| 3 | w2v2-L-robust-12 | CCC | 0.64 | | Unverified |
| 4 | preCPC | CCC | 0.38 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | DAWN-hidden-SVM | Unweighted Accuracy (UA) | 32.1 | | Unverified |
| 2 | Wav2Small-VAD-SVM | Unweighted Accuracy (UA) | 23.3 | | Unverified |
| 3 | Speechbrain Wav2Vec2 | Unweighted Accuracy (UA) | 20.7 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | emotion2vec+base | Weighted Accuracy (WA) | 79.4 | | Unverified |
| 2 | emotion2vec+large | Weighted Accuracy (WA) | 69.5 | | Unverified |
| 3 | emotion2vec | Weighted Accuracy (WA) | 64.75 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | Dusha baseline | Macro F1 | 0.77 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | Dusha baseline | Macro F1 | 0.54 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | VGG-optiVMD | 1:1 Accuracy | 96.09 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | VQ-MAE-S-12 (Frame) + Query2Emo | Accuracy | 90.2 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | PyResNet | Unweighted Accuracy (UA) | 0.43 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | emoDARTS | UA | 0.66 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | LSTM | CCC (Arousal) | 0.76 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | CNN (1D) | Unweighted Accuracy | 65.2 | | Unverified |
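The benchmark tables report a few recurring SER metrics: WA (weighted accuracy, i.e. overall accuracy), UA (unweighted accuracy, i.e. macro-average recall over emotion classes), and CCC (concordance correlation coefficient, used for dimensional attributes such as arousal and valence). Exact definitions can vary between papers; a minimal NumPy sketch of the common conventions:

```python
import numpy as np

def weighted_accuracy(y_true, y_pred):
    # WA: fraction of all utterances classified correctly.
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float(np.mean(y_true == y_pred))

def unweighted_accuracy(y_true, y_pred):
    # UA: recall computed per class, then averaged, so that rare
    # emotions count as much as frequent ones (macro-average recall).
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    recalls = [np.mean(y_pred[y_true == c] == c) for c in np.unique(y_true)]
    return float(np.mean(recalls))

def ccc(x, y):
    # Concordance correlation coefficient between predicted and
    # reference continuous values (e.g. arousal); 1.0 = perfect agreement.
    x, y = np.asarray(x, float), np.asarray(y, float)
    cov = np.mean((x - x.mean()) * (y - y.mean()))
    return float(2 * cov / (x.var() + y.var() + (x.mean() - y.mean()) ** 2))
```

For example, with `y_true = [0, 0, 0, 1]` and `y_pred = [0, 0, 1, 1]`, WA is 0.75 while UA is (2/3 + 1)/2 ≈ 0.83, which illustrates why the two numbers differ on class-imbalanced corpora.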