SOTAVerified

Speech Emotion Recognition

Speech Emotion Recognition is a task in speech processing and computational paralinguistics that aims to recognize and categorize the emotions expressed in spoken language. The goal is to infer a speaker's emotional state, such as happiness, anger, sadness, or frustration, from speech characteristics such as prosody, pitch, and rhythm.
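As a rough illustration of the kind of prosodic cues mentioned above, the sketch below extracts two per-frame descriptors, RMS energy and a crude autocorrelation-based pitch (F0) estimate, from a synthetic voiced frame. This is a minimal NumPy-only sketch: the frame length, the 50–400 Hz search band, and the autocorrelation estimator are illustrative assumptions, not a production SER front end.

```python
import numpy as np

def frame_features(frame: np.ndarray, sr: int) -> dict:
    """Crude prosodic descriptors for one audio frame: RMS energy and an
    autocorrelation-based F0 estimate (illustrative, not production-grade)."""
    rms = float(np.sqrt(np.mean(frame ** 2)))
    # Autocorrelation of the frame; keep non-negative lags only.
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    # Search lags corresponding to a plausible speech F0 range (50-400 Hz).
    lo, hi = sr // 400, sr // 50
    lag = lo + int(np.argmax(ac[lo:hi]))
    f0 = sr / lag
    return {"rms": rms, "f0": f0}

sr = 16000
t = np.arange(int(0.04 * sr)) / sr           # one 40 ms frame
frame = 0.5 * np.sin(2 * np.pi * 200 * t)    # synthetic "voiced" signal at 200 Hz
feats = frame_features(frame, sr)            # recovers f0 = 200.0 Hz
```

In a real SER system, sequences of such frame-level features (or learned representations from models like wav2vec 2.0 or HuBERT, which dominate the leaderboards below) feed a classifier over emotion categories.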

For multimodal emotion recognition, please upload your results to Multimodal Emotion Recognition on IEMOCAP.

Papers

Showing 251–300 of 431 papers

Title | Status | Hype
SpeechEQ: Speech Emotion Recognition based on Multi-scale Unified Datasets and Multitask Learning | — | 0
Speech Swin-Transformer: Exploring a Hierarchical Transformer with Shifted Windows for Speech Emotion Recognition | — | 0
STAA-Net: A Sparse and Transferable Adversarial Attack for Speech Emotion Recognition | — | 0
Stimulus Modality Matters: Impact of Perceptual Evaluations from Different Modalities on Speech Emotion Recognition System Performance | — | 0
Study on Feature Subspace of Archetypal Emotions for Speech Emotion Recognition | — | 0
Supervised Contrastive Learning with Nearest Neighbor Search for Speech Emotion Recognition | — | 0
Support Super-Vector Machines in Automatic Speech Emotion Recognition | — | 0
SyntAct: A Synthesized Database of Basic Emotions | — | 0
TemporalAugmenter: An Ensemble Recurrent Based Deep Learning Approach for Signal Classification | — | 0
Testing Correctness, Fairness, and Robustness of Speech Emotion Recognition Models | — | 0
The Broad Impact of Feature Imitation: Neural Enhancements Across Financial, Speech, and Physiological Domains | — | 0
The NeurIPS 2023 Machine Learning for Audio Workshop: Affective Audio Benchmarks and Novel Data | — | 0
The Role of Phonetic Units in Speech Emotion Recognition | — | 0
Toward end-to-end interpretable convolutional neural networks for waveform signals | — | 0
Towards adversarial learning of speaker-invariant representation for speech emotion recognition | — | 0
Towards Interpretable and Transferable Speech Emotion Recognition: Latent Representation Based Analysis of Features, Methods and Corpora | — | 0
Towards Machine Unlearning for Paralinguistic Speech Processing | — | 0
Towards Speech Emotion Recognition "in the wild" using Aggregated Corpora and Deep Multi-Task Learning | — | 0
Towards Transferable Speech Emotion Representation: On loss functions for cross-lingual latent representations | — | 0
Transferable Positive/Negative Speech Emotion Recognition via Class-wise Adversarial Domain Adaptation | — | 0
Transfer Learning for Personality Perception via Speech Emotion Recognition | — | 0
Transforming the Embeddings: A Lightweight Technique for Speech Emotion Recognition Tasks | — | 0
TRNet: Two-level Refinement Network leveraging Speech Enhancement for Noise Robust Speech Emotion Recognition | — | 0
Turbo your multi-modal classification with contrastive learning | — | 0
Two-stage Framework for Robust Speech Emotion Recognition Using Target Speaker Extraction in Human Speech Noise Conditions | — | 0
Unifying the Discrete and Continuous Emotion labels for Speech Emotion Recognition | — | 0
Unsupervised Cross-Lingual Speech Emotion Recognition Using Domain-Adversarial Neural Network | — | 0
Unsupervised low-rank representations for speech emotion recognition | — | 0
Unsupervised Personalization of an Emotion Recognition System: The Unique Properties of the Externalization of Valence in Speech | — | 0
Unsupervised Representation Learning with Future Observation Prediction for Speech Emotion Recognition | — | 0
Unsupervised Representations Improve Supervised Learning in Speech Emotion Recognition | — | 0
Usefulness of Emotional Prosody in Neural Machine Translation | — | 0
Utilizing Speech Emotion Recognition and Recommender Systems for Negative Emotion Handling in Therapy Chatbots | — | 0
Variational Autoencoders for Learning Latent Representations of Speech Emotion: A Preliminary Study | — | 0
Versatile audio-visual learning for emotion recognition | — | 0
Visually Guided Self Supervised Learning of Speech Representations | — | 0
WavFusion: Towards wav2vec 2.0 Multimodal Speech Emotion Recognition | — | 0
"We care": Improving Code Mixed Speech Emotion Recognition in Customer-Care Conversations | — | 0
What Does it Take to Generalize SER Model Across Datasets? A Comprehensive Benchmark | — | 0
1st Place Solution to Odyssey Emotion Recognition Challenge Task1: Tackling Class Imbalance Problem | — | 0
Conditioning LLMs with Emotion in Neural Machine Translation | — | 0
CAMEO: Collection of Multilingual Emotional Speech Corpora | — | 0
EMO-Debias: Benchmarking Gender Debiasing Techniques in Multi-Label Speech Emotion Recognition | — | 0
A breakthrough in Speech emotion recognition using Deep Retinal Convolution Neural Networks | — | 0
Accounting for Variations in Speech Emotion Recognition with Nonparametric Hierarchical Neural Network | — | 0
A Comparative Study of Pre-trained Speech and Audio Embeddings for Speech Emotion Recognition | — | 0
Acoustic-to-articulatory Speech Inversion with Multi-task Learning | — | 0
A Cross-Corpus Speech Emotion Recognition Method Based on Supervised Contrastive Learning | — | 0
A cross-corpus study on speech emotion recognition | — | 0
A Cross-Lingual Meta-Learning Method Based on Domain Adaptation for Speech Emotion Recognition | — | 0
Page 6 of 9

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Vertically long patch ViT | Accuracy | 94.07 | — | Unverified
2 | ConformerXL-P | Accuracy | 88.2 | — | Unverified
3 | CoordViT | Accuracy | 82.96 | — | Unverified
4 | SepTr + LeRaC | Accuracy | 70.95 | — | Unverified
5 | SepTr | Accuracy | 70.47 | — | Unverified
6 | ResNet-18 + SPEL | Accuracy | 68.12 | — | Unverified
7 | ViT | Accuracy | 67.81 | — | Unverified
8 | ResNet-18 + PyNADA | Accuracy | 65.15 | — | Unverified
9 | GRU | Accuracy | 55.01 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SER with MTL | UA CV | 0.78 | — | Unverified
2 | emoDARTS | UA CV | 0.77 | — | Unverified
3 | LSTM+FC | WA | 0.76 | — | Unverified
4 | TAP | WA CV | 0.74 | — | Unverified
5 | SYSCOMB: BLSTMATT with CSA (session5) | UA | 0.74 | — | Unverified
6 | Partially Fine-tuned HuBERT Large | WA CV | 0.73 | — | Unverified
7 | CNN - DARTS | UA | 0.7 | — | Unverified
8 | CNN+LSTM | UA | 0.65 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | VQ-MAE-S-12 (Frame) + Query2Emo | Accuracy | 84.1 | — | Unverified
2 | CNN-X (Shallow CNN) | Accuracy | 82.99 | — | Unverified
3 | xlsr-Wav2Vec2.0 (FineTuning) | Accuracy | 81.82 | — | Unverified
4 | CNN-14 (Fine-Tuning) | Accuracy | 76.58 | — | Unverified
5 | AlexNet (FineTuning) | Accuracy | 61.67 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | wav2small-Teacher | CCC | 0.76 | — | Unverified
2 | wavlm | CCC | 0.75 | — | Unverified
3 | w2v2-L-robust-12 | CCC | 0.75 | — | Unverified
4 | preCPC | CCC | 0.71 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | wav2small-Teacher | CCC | 0.68 | — | Unverified
2 | wavlm | CCC | 0.67 | — | Unverified
3 | w2v2-L-robust-12 | CCC | 0.66 | — | Unverified
4 | preCPC | CCC | 0.64 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | wav2small-Teacher | CCC | 0.68 | — | Unverified
2 | wavlm | CCC | 0.65 | — | Unverified
3 | w2v2-L-robust-12 | CCC | 0.64 | — | Unverified
4 | preCPC | CCC | 0.38 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | DAWN-hidden-SVM | Unweighted Accuracy (UA) | 32.1 | — | Unverified
2 | Wav2Small-VAD-SVM | Unweighted Accuracy (UA) | 23.3 | — | Unverified
3 | Speechbrain Wav2Vec2 | Unweighted Accuracy (UA) | 20.7 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | emotion2vec+base | Weighted Accuracy (WA) | 79.4 | — | Unverified
2 | emotion2vec+large | Weighted Accuracy (WA) | 69.5 | — | Unverified
3 | emotion2vec | Weighted Accuracy (WA) | 64.75 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Dusha baseline | Macro F1 | 0.77 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Dusha baseline | Macro F1 | 0.54 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | VGG-optiVMD | 1:1 Accuracy | 96.09 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | VQ-MAE-S-12 (Frame) + Query2Emo | Accuracy | 90.2 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | PyResNet | Unweighted Accuracy (UA) | 0.43 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | emoDARTS | UA | 0.66 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LSTM | CCC (Arousal) | 0.76 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CNN (1D) | Unweighted Accuracy | 65.2 | — | Unverified
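The metrics reported above follow their standard SER definitions: UA (unweighted accuracy) is the mean of per-class recalls, so each emotion class counts equally regardless of size; WA (weighted accuracy) is plain overall accuracy, so frequent classes weigh more; CCC (concordance correlation coefficient) scores agreement between predicted and annotated dimensional labels such as arousal or valence. A minimal sketch of these computations, assuming NumPy arrays of labels (the toy data is illustrative):

```python
import numpy as np

def unweighted_accuracy(y_true, y_pred):
    """UA: mean of per-class recalls (macro-average recall)."""
    classes = np.unique(y_true)
    recalls = [np.mean(y_pred[y_true == c] == c) for c in classes]
    return float(np.mean(recalls))

def weighted_accuracy(y_true, y_pred):
    """WA: plain accuracy; frequent classes contribute proportionally more."""
    return float(np.mean(y_true == y_pred))

def ccc(x, y):
    """Concordance Correlation Coefficient for dimensional labels (e.g. arousal)."""
    cov = np.mean((x - x.mean()) * (y - y.mean()))
    return float(2 * cov / (np.var(x) + np.var(y) + (x.mean() - y.mean()) ** 2))

# Toy example: a minority class (label 1) pulls UA and WA apart.
y_true = np.array([0, 0, 0, 1])
y_pred = np.array([0, 0, 1, 1])
ua = unweighted_accuracy(y_true, y_pred)   # (2/3 + 1/1) / 2 = 0.833...
wa = weighted_accuracy(y_true, y_pred)     # 3/4 = 0.75
```

UA is the usual headline metric on class-imbalanced corpora such as IEMOCAP, which is why many tables above report it instead of (or alongside) WA.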