SOTAVerified

Speech Emotion Recognition

Speech Emotion Recognition (SER) is a task in speech processing and computational paralinguistics that aims to recognize and categorize the emotions expressed in spoken language. The goal is to infer a speaker's emotional state, such as happiness, anger, sadness, or frustration, from properties of their speech such as prosody, pitch, and rhythm.
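As a toy illustration of this pipeline, the sketch below extracts a few crude prosodic features with NumPy (frame energy, zero-crossing rate, and an autocorrelation-based pitch proxy) and assigns an emotion by nearest class centroid. All function names and parameters here are invented for illustration; real SER systems use far richer features and learned classifiers.

```python
import numpy as np

def prosodic_features(wave: np.ndarray, sr: int = 16000, frame: int = 400) -> np.ndarray:
    """Mean frame energy, zero-crossing rate, and a crude pitch estimate --
    a toy stand-in for the prosodic features SER systems actually use."""
    frames = wave[: len(wave) // frame * frame].reshape(-1, frame)
    energy = (frames ** 2).mean(axis=1)
    zcr = (np.diff(np.sign(frames), axis=1) != 0).mean(axis=1)
    pitches = []
    for f in frames:
        # pitch proxy: lag of the autocorrelation peak, searched in 50-400 Hz
        ac = np.correlate(f, f, mode="full")[frame - 1:]
        lo, hi = sr // 400, sr // 50
        pitches.append(sr / (lo + np.argmax(ac[lo:hi])))
    return np.array([energy.mean(), zcr.mean(), float(np.mean(pitches))])

def nearest_centroid(x: np.ndarray, centroids: list, labels: list) -> str:
    """Assign x to the emotion whose mean feature vector is closest."""
    distances = [np.linalg.norm(x - c) for c in centroids]
    return labels[int(np.argmin(distances))]
```

In practice the hand-crafted features would be replaced by learned representations (e.g. from self-supervised acoustic models, as many papers below do) and the centroid rule by a trained classifier.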

For multimodal emotion recognition, please upload your results to the Multimodal Emotion Recognition on IEMOCAP task instead.

Papers

Showing 351–400 of 431 papers

| Title | Status | Hype |
|---|---|---|
| Deep Residual Local Feature Learning for Speech Emotion Recognition | | 0 |
| On the use of Self-supervised Pre-trained Acoustic and Linguistic Features for Continuous Speech Emotion Recognition | | 0 |
| Recognizing More Emotions with Less Data Using Self-supervised Transfer Learning | | 0 |
| Efficient Arabic emotion recognition using deep neural networks | Code | 0 |
| Empirical Interpretation of Speech Emotion Perception with Attention Based Model for Speech Emotion Recognition | | 0 |
| Context-Dependent Domain Adversarial Neural Network for Multimodal Emotion Recognition | | 0 |
| CopyPaste: An Augmentation Method for Speech Emotion Recognition | | 0 |
| Emotion controllable speech synthesis using emotion-unlabeled dataset with the assistance of cross-domain speech emotion recognition | | 0 |
| Multi-stream Attention-based BLSTM with Feature Segmentation for Speech Emotion Recognition | | 0 |
| Dynamic Layer Customization for Noise Robust Speech Emotion Recognition in Heterogeneous Condition Training | | 0 |
| Multi-Window Data Augmentation Approach for Speech Emotion Recognition | | 0 |
| Optimizing Speech Emotion Recognition using Manta-Ray Based Feature Selection | | 0 |
| Is Everything Fine, Grandma? Acoustic and Linguistic Modeling for Robust Elderly Speech Emotion Recognition | Code | 0 |
| Fine-grained Early Frequency Attention for Deep Speaker Representation Learning | | 0 |
| A Transfer Learning Method for Speech Emotion Recognition from Automatic Speech Recognition | | 0 |
| Shallow over Deep Neural Networks: A empirical analysis for human emotion classification using audio data | | 0 |
| Meta Transfer Learning for Emotion Recognition | | 0 |
| A Siamese Neural Network with Modified Distance Loss For Transfer Learning in Speech Emotion Recognition | | 0 |
| ConcealNet: An End-to-end Neural Network for Packet Loss Concealment in Deep Speech Emotion Recognition | | 0 |
| "I have vxxx bxx connexxxn!": Facing Packet Loss in Deep Speech Emotion Recognition | | 0 |
| On The Differences Between Song and Speech Emotion Recognition: Effect of Feature Sets, Feature Types, and Classifiers | Code | 0 |
| Cross Lingual Cross Corpus Speech Emotion Recognition | | 0 |
| Speech Emotion Recognition using Support Vector Machine | | 0 |
| Non-linear Neurons with Human-like Apical Dendrite Activations | Code | 0 |
| Speech Emotion Recognition Based on Multi-feature and Multi-lingual Fusion | | 0 |
| Visually Guided Self Supervised Learning of Speech Representations | | 0 |
| Learning Transferable Features for Speech Emotion Recognition | | 0 |
| Bimodal Speech Emotion Recognition Using Pre-Trained Language Models | | 0 |
| Attentive Modality Hopping Mechanism for Speech Emotion Recognition | Code | 0 |
| Speech Emotion Recognition Using Speech Feature and Word Embedding | Code | 0 |
| Speaker-invariant Affective Representation Learning via Adversarial Training | | 0 |
| Unsupervised Representation Learning with Future Observation Prediction for Speech Emotion Recognition | | 0 |
| Speech Emotion Recognition via Contrastive Loss under Siamese Networks | | 0 |
| Speech Emotion Recognition with Dual-Sequence LSTM Architecture | | 0 |
| Learning Alignment for Multimodal Emotion Recognition from Speech | Code | 0 |
| Pitch-Synchronous Single Frequency Filtering Spectrogram for Speech Emotion Recognition | | 0 |
| Learning Discriminative features using Center Loss and Reconstruction as Regularizer for Speech Emotion Recognition | | 0 |
| Focal Loss based Residual Convolutional Neural Network for Speech Emotion Recognition | | 0 |
| Deep Learning based Emotion Recognition System Using Speech Features and Transcriptions | Code | 0 |
| Speech Emotion Recognition Using Multi-hop Attention Mechanism | Code | 0 |
| An Interaction-aware Attention Network for Speech Emotion Recognition in Spoken Dialogs | Code | 0 |
| Multimodal Speech Emotion Recognition and Ambiguity Resolution | Code | 0 |
| Attention-Augmented End-to-End Multi-Task Learning for Emotion Prediction from Speech | Code | 0 |
| Improving Cross-Corpus Speech Emotion Recognition with Adversarial Discriminative Domain Generalization (ADDoG) | | 0 |
| Towards adversarial learning of speaker-invariant representation for speech emotion recognition | | 0 |
| Cross Lingual Speech Emotion Recognition: Urdu vs. Western Languages | Code | 0 |
| Adversarial Machine Learning And Speech Emotion Recognition: Utilizing Generative Adversarial Networks For Robustness | | 0 |
| Improving speech emotion recognition via Transformer-based Predictive Coding through transfer learning | | 0 |
| Integrating Recurrence Dynamics for Speech Emotion Recognition | Code | 0 |
| Transferable Positive/Negative Speech Emotion Recognition via Class-wise Adversarial Domain Adaptation | | 0 |
Page 8 of 9

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Vertically long patch ViT | Accuracy | 94.07 | | Unverified |
| 2 | ConformerXL-P | Accuracy | 88.2 | | Unverified |
| 3 | CoordViT | Accuracy | 82.96 | | Unverified |
| 4 | SepTr + LeRaC | Accuracy | 70.95 | | Unverified |
| 5 | SepTr | Accuracy | 70.47 | | Unverified |
| 6 | ResNet-18 + SPEL | Accuracy | 68.12 | | Unverified |
| 7 | ViT | Accuracy | 67.81 | | Unverified |
| 8 | ResNet-18 + PyNADA | Accuracy | 65.15 | | Unverified |
| 9 | GRU | Accuracy | 55.01 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | SER with MTL | UA CV | 0.78 | | Unverified |
| 2 | emoDARTS | UA CV | 0.77 | | Unverified |
| 3 | LSTM+FC | WA | 0.76 | | Unverified |
| 4 | TAP | WA CV | 0.74 | | Unverified |
| 5 | SYSCOMB: BLSTMATT with CSA (session5) | UA | 0.74 | | Unverified |
| 6 | Partially Fine-tuned HuBERT Large | WA CV | 0.73 | | Unverified |
| 7 | CNN - DARTS | UA | 0.7 | | Unverified |
| 8 | CNN+LSTM | UA | 0.65 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | VQ-MAE-S-12 (Frame) + Query2Emo | Accuracy | 84.1 | | Unverified |
| 2 | CNN-X (Shallow CNN) | Accuracy | 82.99 | | Unverified |
| 3 | xlsr-Wav2Vec2.0(FineTuning) | Accuracy | 81.82 | | Unverified |
| 4 | CNN-14 (Fine-Tuning) | Accuracy | 76.58 | | Unverified |
| 5 | AlexNet (FineTuning) | Accuracy | 61.67 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | wav2small-Teacher | CCC | 0.76 | | Unverified |
| 2 | wavlm | CCC | 0.75 | | Unverified |
| 3 | w2v2-L-robust-12 | CCC | 0.75 | | Unverified |
| 4 | preCPC | CCC | 0.71 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | wav2small-Teacher | CCC | 0.68 | | Unverified |
| 2 | wavlm | CCC | 0.67 | | Unverified |
| 3 | w2v2-L-robust-12 | CCC | 0.66 | | Unverified |
| 4 | preCPC | CCC | 0.64 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | wav2small-Teacher | CCC | 0.68 | | Unverified |
| 2 | wavlm | CCC | 0.65 | | Unverified |
| 3 | w2v2-L-robust-12 | CCC | 0.64 | | Unverified |
| 4 | preCPC | CCC | 0.38 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | DAWN-hidden-SVM | Unweighted Accuracy (UA) | 32.1 | | Unverified |
| 2 | Wav2Small-VAD-SVM | Unweighted Accuracy (UA) | 23.3 | | Unverified |
| 3 | Speechbrain Wav2Vec2 | Unweighted Accuracy (UA) | 20.7 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | emotion2vec+base | Weighted Accuracy (WA) | 79.4 | | Unverified |
| 2 | emotion2vec+large | Weighted Accuracy (WA) | 69.5 | | Unverified |
| 3 | emotion2vec | Weighted Accuracy (WA) | 64.75 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Dusha baseline | Macro F1 | 0.77 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Dusha baseline | Macro F1 | 0.54 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | VGG-optiVMD | 1:1 Accuracy | 96.09 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | VQ-MAE-S-12 (Frame) + Query2Emo | Accuracy | 90.2 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | PyResNet | Unweighted Accuracy (UA) | 0.43 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | emoDARTS | UA | 0.66 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | LSTM | CCC (Arousal) | 0.76 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | CNN (1D) | Unweighted Accuracy | 65.2 | | Unverified |
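The benchmark tables mix several metrics: WA (weighted accuracy, i.e. plain utterance-level accuracy, which favors large classes), UA (unweighted accuracy, the mean of per-class recalls, which treats each emotion equally under class imbalance), and CCC (concordance correlation coefficient, used for continuous arousal/valence prediction). A minimal NumPy sketch of these definitions, with function names of our own choosing rather than from any listed paper:

```python
import numpy as np

def weighted_accuracy(y_true, y_pred) -> float:
    """WA: fraction of all utterances classified correctly."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float((y_true == y_pred).mean())

def unweighted_accuracy(y_true, y_pred) -> float:
    """UA: recall averaged over classes, so each emotion counts equally."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    recalls = [(y_pred[y_true == c] == c).mean() for c in np.unique(y_true)]
    return float(np.mean(recalls))

def ccc(x, y) -> float:
    """Concordance correlation coefficient for continuous labels:
    2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    return float(2 * cov / (x.var() + y.var() + (mx - my) ** 2))
```

For example, a classifier that labels every utterance with the majority class scores a high WA on an imbalanced test set but a low UA, which is why many IEMOCAP results report both.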