SOTAVerified

Speech Emotion Recognition

Speech Emotion Recognition is a task in speech processing and computational paralinguistics that aims to recognize and categorize the emotions expressed in spoken language. The goal is to infer a speaker's emotional state, such as happiness, anger, sadness, or frustration, from acoustic cues such as prosody, pitch, and rhythm.
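A core prosodic cue mentioned above is pitch (fundamental frequency). As a minimal illustrative sketch, not any system from the papers listed below, pitch can be estimated per frame with autocorrelation; the function name and the synthetic 200 Hz test frame are assumptions for the example:

```python
import numpy as np

def estimate_pitch(frame: np.ndarray, sr: int, fmin: float = 50.0, fmax: float = 400.0) -> float:
    """Estimate the fundamental frequency (Hz) of a speech frame via autocorrelation."""
    frame = frame - frame.mean()
    # Autocorrelation at non-negative lags only.
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    # Restrict the search to lags corresponding to a plausible pitch range.
    lo, hi = int(sr / fmax), int(sr / fmin)
    lag = lo + int(np.argmax(ac[lo:hi]))
    return sr / lag

# A synthetic 200 Hz sine stands in for a voiced 30 ms speech frame.
sr = 16000
t = np.arange(int(0.03 * sr)) / sr
frame = np.sin(2 * np.pi * 200.0 * t)
print(estimate_pitch(frame, sr))  # → 200.0
```

Real SER front-ends track such features (pitch, energy, rhythm) over time, or replace them entirely with learned representations such as Wav2vec 2.0 embeddings, as several papers below do.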

For multimodal emotion recognition, please submit results to the Multimodal Emotion Recognition on IEMOCAP benchmark instead.

Papers

Showing 301–350 of 431 papers

Title | Status | Hype
SERAB: A multi-lingual benchmark for speech emotion recognition | Code | 1
Speech Emotion Recognition Based on CNN+LSTM Model | – | 0
BigSSL: Exploring the Frontier of Large-Scale Semi-Supervised Learning for Automatic Speech Recognition | – | 0
Hybrid Data Augmentation and Deep Attention-based Dilated Convolutional-Recurrent Neural Networks for Speech Emotion Recognition | – | 0
FSER: Deep Convolutional Neural Networks for Speech Emotion Recognition | – | 0
Beyond Isolated Utterances: Conversational Emotion Recognition | – | 0
DeepEMO: Deep Learning for Speech Emotion Recognition | Code | 0
Accounting for Variations in Speech Emotion Recognition with Nonparametric Hierarchical Neural Network | – | 0
Speech Emotion Recognition with Multi-Task Learning | Code | 1
Unsupervised Cross-Lingual Speech Emotion Recognition Using Pseudo Multilabel | Code | 0
Improved Speech Emotion Recognition using Transfer Learning and Spectrogram Augmentation | – | 0
The Role of Phonetic Units in Speech Emotion Recognition | – | 0
An Improved StarGAN for Emotional Voice Conversion: Enhancing Voice Quality and Data Augmentation | Code | 0
Expressive Voice Conversion: A Joint Framework for Speaker Identity and Emotional Style Transfer | – | 0
Automatic Analysis of the Emotional Content of Speech in Daylong Child-Centered Recordings from a Neonatal Intensive Care Unit | – | 0
Efficient Speech Emotion Recognition Using Multi-Scale CNN and Attention | Code | 1
An Attribute-Aligned Strategy for Learning Speech Representation | – | 0
Deep scattering network for speech emotion recognition | – | 0
Towards Interpretable and Transferable Speech Emotion Recognition: Latent Representation Based Analysis of Features, Methods and Corpora | – | 0
On the Impact of Word Error Rate on Acoustic-Linguistic Speech Emotion Recognition: An Update for the Deep Learning Era | – | 0
Best Practices for Noise-Based Augmentation to Improve the Performance of Deployable Speech-Based Emotion Recognition Systems | – | 0
Speaker Attentive Speech Emotion Recognition | – | 0
Unsupervised low-rank representations for speech emotion recognition | – | 0
Emotion Recognition from Speech Using Wav2vec 2.0 Embeddings | Code | 1
AST: Audio Spectrogram Transformer | Code | 2
Reinforcement Learning for Emotional Text-to-Speech Synthesis with Improved Emotion Discriminability | – | 0
Enhancing Segment-Based Speech Emotion Recognition by Deep Self-Learning | – | 0
Self-paced ensemble learning for speech and audio classification | – | 0
EmoNet: A Transfer Learning Framework for Multi-Corpus Speech Emotion Recognition | Code | 1
Pre-trained Deep Convolution Neural Network Model With Attention for Speech Emotion Recognition | Code | 1
Investigations on Audiovisual Emotion Recognition in Noisy Conditions | – | 0
Contrastive Unsupervised Learning for Speech Emotion Recognition | – | 0
Non-linear frequency warping using constant-Q transformation for speech emotion recognition | – | 0
Speech Emotion Recognition with Multiscale Area Attention and Data Augmentation | – | 0
LSSED: a large-scale dataset and benchmark for speech emotion recognition | Code | 1
Fixed-MAML for Few Shot Classification in Multilingual Speech Emotion Recognition | Code | 0
A novel policy for pre-trained Deep Reinforcement Learning for Speech Emotion Recognition | Code | 0
Unsupervised Cross-Lingual Speech Emotion Recognition Using Domain-Adversarial Neural Network | – | 0
Multi-Classifier Interactive Learning for Ambiguous Speech Emotion Recognition | – | 0
Convolutional and Recurrent Neural Networks for Spoken Emotion Recognition | – | 0
Deep Residual Local Feature Learning for Speech Emotion Recognition | – | 0
On the use of Self-supervised Pre-trained Acoustic and Linguistic Features for Continuous Speech Emotion Recognition | – | 0
Recognizing More Emotions with Less Data Using Self-supervised Transfer Learning | – | 0
Efficient Arabic emotion recognition using deep neural networks | Code | 0
Empirical Interpretation of Speech Emotion Perception with Attention Based Model for Speech Emotion Recognition | – | 0
Context-Dependent Domain Adversarial Neural Network for Multimodal Emotion Recognition | – | 0
Seen and Unseen emotional style transfer for voice conversion with a new emotional speech dataset | Code | 1
CopyPaste: An Augmentation Method for Speech Emotion Recognition | – | 0
Speech SIMCLR: Combining Contrastive and Reconstruction Objective for Self-supervised Speech Representation Learning | Code | 1
Emotion controllable speech synthesis using emotion-unlabeled dataset with the assistance of cross-domain speech emotion recognition | – | 0
Page 7 of 9

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Vertically long patch ViT | Accuracy | 94.07 | – | Unverified
2 | ConformerXL-P | Accuracy | 88.2 | – | Unverified
3 | CoordViT | Accuracy | 82.96 | – | Unverified
4 | SepTr + LeRaC | Accuracy | 70.95 | – | Unverified
5 | SepTr | Accuracy | 70.47 | – | Unverified
6 | ResNet-18 + SPEL | Accuracy | 68.12 | – | Unverified
7 | ViT | Accuracy | 67.81 | – | Unverified
8 | ResNet-18 + PyNADA | Accuracy | 65.15 | – | Unverified
9 | GRU | Accuracy | 55.01 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SER with MTL | UA CV | 0.78 | – | Unverified
2 | emoDARTS | UA CV | 0.77 | – | Unverified
3 | LSTM+FC | WA | 0.76 | – | Unverified
4 | TAP | WA CV | 0.74 | – | Unverified
5 | SYSCOMB: BLSTMATT with CSA (session5) | UA | 0.74 | – | Unverified
6 | Partially Fine-tuned HuBERT Large | WA CV | 0.73 | – | Unverified
7 | CNN - DARTS | UA | 0.7 | – | Unverified
8 | CNN+LSTM | UA | 0.65 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | VQ-MAE-S-12 (Frame) + Query2Emo | Accuracy | 84.1 | – | Unverified
2 | CNN-X (Shallow CNN) | Accuracy | 82.99 | – | Unverified
3 | xlsr-Wav2Vec2.0 (FineTuning) | Accuracy | 81.82 | – | Unverified
4 | CNN-14 (Fine-Tuning) | Accuracy | 76.58 | – | Unverified
5 | AlexNet (FineTuning) | Accuracy | 61.67 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | wav2small-Teacher | CCC | 0.76 | – | Unverified
2 | wavlm | CCC | 0.75 | – | Unverified
3 | w2v2-L-robust-12 | CCC | 0.75 | – | Unverified
4 | preCPC | CCC | 0.71 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | wav2small-Teacher | CCC | 0.68 | – | Unverified
2 | wavlm | CCC | 0.67 | – | Unverified
3 | w2v2-L-robust-12 | CCC | 0.66 | – | Unverified
4 | preCPC | CCC | 0.64 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | wav2small-Teacher | CCC | 0.68 | – | Unverified
2 | wavlm | CCC | 0.65 | – | Unverified
3 | w2v2-L-robust-12 | CCC | 0.64 | – | Unverified
4 | preCPC | CCC | 0.38 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | DAWN-hidden-SVM | Unweighted Accuracy (UA) | 32.1 | – | Unverified
2 | Wav2Small-VAD-SVM | Unweighted Accuracy (UA) | 23.3 | – | Unverified
3 | Speechbrain Wav2Vec2 | Unweighted Accuracy (UA) | 20.7 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | emotion2vec+base | Weighted Accuracy (WA) | 79.4 | – | Unverified
2 | emotion2vec+large | Weighted Accuracy (WA) | 69.5 | – | Unverified
3 | emotion2vec | Weighted Accuracy (WA) | 64.75 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Dusha baseline | Macro F1 | 0.77 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Dusha baseline | Macro F1 | 0.54 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | VGG-optiVMD | 1:1 Accuracy | 96.09 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | VQ-MAE-S-12 (Frame) + Query2Emo | Accuracy | 90.2 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | PyResNet | Unweighted Accuracy (UA) | 0.43 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | emoDARTS | UA | 0.66 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LSTM | CCC (Arousal) | 0.76 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CNN (1D) | Unweighted Accuracy | 65.2 | – | Unverified
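The benchmark tables report three recurring metrics: WA (weighted accuracy, overall accuracy, which favours frequent classes), UA (unweighted accuracy, recall averaged over classes, so rare emotions count equally), and CCC (concordance correlation coefficient, used for continuous labels such as arousal). A minimal sketch of how these are commonly computed; the function names and the toy labels are illustrative assumptions, not taken from any listed benchmark:

```python
import numpy as np

def weighted_accuracy(y_true, y_pred):
    """WA: fraction of all utterances classified correctly (favours frequent classes)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float(np.mean(y_true == y_pred))

def unweighted_accuracy(y_true, y_pred):
    """UA: per-class recall averaged over classes, so rare emotions count equally."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    classes = np.unique(y_true)
    return float(np.mean([np.mean(y_pred[y_true == c] == c) for c in classes]))

def ccc(x, y):
    """Concordance correlation coefficient for continuous labels (e.g. arousal)."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    cov = np.mean((x - x.mean()) * (y - y.mean()))
    return float(2 * cov / (x.var() + y.var() + (x.mean() - y.mean()) ** 2))

# Toy example: 4 "angry" (0) and 2 "sad" (1) utterances, one "sad" misclassified.
y_true = [0, 0, 0, 0, 1, 1]
y_pred = [0, 0, 0, 0, 1, 0]
print(weighted_accuracy(y_true, y_pred))    # 5/6 ≈ 0.833
print(unweighted_accuracy(y_true, y_pred))  # (1.0 + 0.5) / 2 = 0.75
```

On class-imbalanced corpora WA and UA can diverge sharply, which is why both appear above; CCC reaches 1 only for predictions that match the labels in both correlation and scale.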