SOTAVerified

Speech Emotion Recognition

Speech Emotion Recognition (SER) is a task in speech processing and computational paralinguistics that aims to recognize and categorize the emotions expressed in spoken language. The goal is to infer a speaker's emotional state, such as happiness, anger, sadness, or frustration, from acoustic cues such as prosody, pitch, energy, and rhythm.
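The task definition above leans on prosodic cues such as pitch and energy. As a minimal, self-contained sketch (plain NumPy, not the pipeline of any paper listed below), here is one way to extract two such frame-level cues; the function names and the autocorrelation-based pitch estimator are illustrative choices, not a reference implementation:

```python
import numpy as np

def estimate_pitch(frame, sr, fmin=50.0, fmax=500.0):
    """Estimate the fundamental frequency (Hz) of one frame via autocorrelation."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)   # restrict lags to a plausible F0 range
    lag = lo + int(np.argmax(ac[lo:hi]))
    return sr / lag

def prosodic_features(signal, sr, frame_len=2048, hop=512):
    """Frame-level pitch and RMS energy: two simple prosodic cues for SER."""
    pitches, energies = [], []
    for start in range(0, len(signal) - frame_len, hop):
        frame = signal[start:start + frame_len]
        pitches.append(estimate_pitch(frame, sr))
        energies.append(float(np.sqrt(np.mean(frame ** 2))))
    return np.array(pitches), np.array(energies)

# Demo on a synthetic 220 Hz tone standing in for a voiced segment.
sr = 16000
t = np.arange(sr) / sr
tone = 0.5 * np.sin(2 * np.pi * 220.0 * t)
pitch, energy = prosodic_features(tone, sr)
print(round(float(np.median(pitch)), 1))  # close to 220 Hz
```

In a real SER system such hand-crafted cues are typically replaced or complemented by learned representations (e.g. Wav2vec 2.0 or HuBERT embeddings, as in several papers below), but the same frame-and-aggregate structure applies.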

For multimodal emotion recognition, please upload your results to the Multimodal Emotion Recognition on IEMOCAP task.

Papers

Showing 51–100 of 431 papers

Title | Status | Hype
Arabic Speech Emotion Recognition Employing Wav2vec2.0 and HuBERT Based on BAVED Dataset | Code | 1
Seen and Unseen emotional style transfer for voice conversion with a new emotional speech dataset | Code | 1
Accuracy enhancement method for speech emotion recognition from spectrogram using temporal frequency correlation and positional information learning through knowledge transfer | Code | 1
A Persian ASR-based SER: Modification of Sharif Emotional Speech Database and Investigation of Persian Text Corpora | Code | 1
EmoGator: A New Open Source Vocal Burst Dataset with Baseline Machine Learning Classification Methodologies | Code | 1
Efficient Speech Emotion Recognition Using Multi-Scale CNN and Attention | Code | 1
A vector quantized masked autoencoder for speech emotion recognition | Code | 1
Attribute Inference Attack of Speech Emotion Recognition in Federated Learning Settings | Code | 1
Cross-Lingual Cross-Age Group Adaptation for Low-Resource Elderly Speech Emotion Recognition | Code | 1
Emotion Recognition from Speech Using Wav2vec 2.0 Embeddings | Code | 1
Privacy-preserving Speech Emotion Recognition through Semi-Supervised Federated Learning | Code | 1
Automated Assessment of Encouragement and Warmth in Classrooms Leveraging Multimodal Emotional Features and ChatGPT | - | 0
Audio Representation Learning by Distilling Video as Privileged Information | - | 0
An analysis of large speech models-based representations for speech emotion recognition | - | 0
A cross-corpus study on speech emotion recognition | - | 0
Audio Enhancement for Computer Audition -- An Iterative Training Paradigm Using Sample Importance | - | 0
Disentangling Prosody Representations with Unsupervised Speech Reconstruction | - | 0
Domain Adapting Deep Reinforcement Learning for Real-world Speech Emotion Recognition | - | 0
Analysis of constant-Q filterbank based representations for speech emotion recognition | - | 0
A Cross-Corpus Speech Emotion Recognition Method Based on Supervised Contrastive Learning | - | 0
Attentive Convolutional Neural Network based Speech Emotion Recognition: A Study on the Impact of Input Features, Signal Length, and Acted Speech | - | 0
A Multi-Task, Multi-Modal Approach for Predicting Categorical and Dimensional Emotions | - | 0
Developing a High-performance Framework for Speech Emotion Recognition in Naturalistic Conditions Challenge for Emotional Attribute Prediction | - | 0
Domain Adversarial for Acoustic Emotion Recognition | - | 0
Attention-based Region of Interest (ROI) Detection for Speech Emotion Recognition | - | 0
Acoustic-to-articulatory Speech Inversion with Multi-task Learning | - | 0
Describe Where You Are: Improving Noise-Robustness for Speech Emotion Recognition with Text Description of the Environment | - | 0
A Layer-Anchoring Strategy for Enhancing Cross-Lingual Speech Emotion Recognition | - | 0
A Transfer Learning Method for Speech Emotion Recognition from Automatic Speech Recognition | - | 0
Conditioning LLMs with Emotion in Neural Machine Translation | - | 0
Describing emotions with acoustic property prompts for speech emotion recognition | - | 0
AHD ConvNet for Speech Emotion Classification | - | 0
A Survey on Speech Large Language Models | - | 0
A Comparative Study of Pre-trained Speech and Audio Embeddings for Speech Emotion Recognition | - | 0
A study on cross-corpus speech emotion recognition and data augmentation | - | 0
CopyPaste: An Augmentation Method for Speech Emotion Recognition | - | 0
A Graph Isomorphism Network with Weighted Multiple Aggregators for Speech Emotion Recognition | - | 0
Deep scattering network for speech emotion recognition | - | 0
Designing and Evaluating Speech Emotion Recognition Systems: A reality check case study with IEMOCAP | - | 0
Double Multi-Head Attention Multimodal System for Odyssey 2024 Speech Emotion Recognition Challenge | - | 0
CoordViT: A Novel Method of Improve Vision Transformer-Based Speech Emotion Recognition using Coordinate Information Concatenate | - | 0
Convolutional and Recurrent Neural Networks for Spoken Emotion Recognition | - | 0
ASR and Emotional Speech: A Word-Level Investigation of the Mutual Impact of Speech and Emotion Recognition | - | 0
Converting Anyone's Voice: End-to-End Expressive Voice Conversion with a Conditional Diffusion Model | - | 0
Contrastive Unsupervised Learning for Speech Emotion Recognition | - | 0
A Fine-tuned Wav2vec 2.0/HuBERT Benchmark For Speech Emotion Recognition, Speaker Verification and Spoken Language Understanding | - | 0
CO-VADA: A Confidence-Oriented Voice Augmentation Debiasing Approach for Fair Speech Emotion Recognition | - | 0
Cross-Corpus Multilingual Speech Emotion Recognition: Amharic vs. Other Languages | - | 0
Cross-Language Speech Emotion Recognition Using Multimodal Dual Attention Transformers | - | 0
Deep Learning for Speech Emotion Recognition: A CNN Approach Utilizing Mel Spectrograms | - | 0
Page 2 of 9

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Vertically long patch ViT | Accuracy | 94.07 | - | Unverified
2 | ConformerXL-P | Accuracy | 88.2 | - | Unverified
3 | CoordViT | Accuracy | 82.96 | - | Unverified
4 | SepTr + LeRaC | Accuracy | 70.95 | - | Unverified
5 | SepTr | Accuracy | 70.47 | - | Unverified
6 | ResNet-18 + SPEL | Accuracy | 68.12 | - | Unverified
7 | ViT | Accuracy | 67.81 | - | Unverified
8 | ResNet-18 + PyNADA | Accuracy | 65.15 | - | Unverified
9 | GRU | Accuracy | 55.01 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SER with MTL | UA CV | 0.78 | - | Unverified
2 | emoDARTS | UA CV | 0.77 | - | Unverified
3 | LSTM+FC | WA | 0.76 | - | Unverified
4 | TAP | WA CV | 0.74 | - | Unverified
5 | SYSCOMB: BLSTMATT with CSA (session5) | UA | 0.74 | - | Unverified
6 | Partially Fine-tuned HuBERT Large | WA CV | 0.73 | - | Unverified
7 | CNN - DARTS | UA | 0.7 | - | Unverified
8 | CNN+LSTM | UA | 0.65 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | VQ-MAE-S-12 (Frame) + Query2Emo | Accuracy | 84.1 | - | Unverified
2 | CNN-X (Shallow CNN) | Accuracy | 82.99 | - | Unverified
3 | xlsr-Wav2Vec2.0 (FineTuning) | Accuracy | 81.82 | - | Unverified
4 | CNN-14 (Fine-Tuning) | Accuracy | 76.58 | - | Unverified
5 | AlexNet (FineTuning) | Accuracy | 61.67 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | wav2small-Teacher | CCC | 0.76 | - | Unverified
2 | wavlm | CCC | 0.75 | - | Unverified
3 | w2v2-L-robust-12 | CCC | 0.75 | - | Unverified
4 | preCPC | CCC | 0.71 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | wav2small-Teacher | CCC | 0.68 | - | Unverified
2 | wavlm | CCC | 0.67 | - | Unverified
3 | w2v2-L-robust-12 | CCC | 0.66 | - | Unverified
4 | preCPC | CCC | 0.64 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | wav2small-Teacher | CCC | 0.68 | - | Unverified
2 | wavlm | CCC | 0.65 | - | Unverified
3 | w2v2-L-robust-12 | CCC | 0.64 | - | Unverified
4 | preCPC | CCC | 0.38 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | DAWN-hidden-SVM | Unweighted Accuracy (UA) | 32.1 | - | Unverified
2 | Wav2Small-VAD-SVM | Unweighted Accuracy (UA) | 23.3 | - | Unverified
3 | Speechbrain Wav2Vec2 | Unweighted Accuracy (UA) | 20.7 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | emotion2vec+base | Weighted Accuracy (WA) | 79.4 | - | Unverified
2 | emotion2vec+large | Weighted Accuracy (WA) | 69.5 | - | Unverified
3 | emotion2vec | Weighted Accuracy (WA) | 64.75 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Dusha baseline | Macro F1 | 0.77 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Dusha baseline | Macro F1 | 0.54 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | VGG-optiVMD | 1:1 Accuracy | 96.09 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | VQ-MAE-S-12 (Frame) + Query2Emo | Accuracy | 90.2 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | PyResNet | Unweighted Accuracy (UA) | 0.43 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | emoDARTS | UA | 0.66 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LSTM | CCC (Arousal) | 0.76 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CNN (1D) | Unweighted Accuracy | 65.2 | - | Unverified
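The tables above report several metrics (WA, UA, CCC) without defining them. A minimal NumPy sketch of the usual definitions, assumed here from common SER practice rather than taken from any of the listed papers: weighted accuracy (WA) is overall accuracy, unweighted accuracy (UA) averages per-class recall so minority emotions count equally, and the concordance correlation coefficient (CCC) scores dimensional (e.g. arousal/valence) predictions.

```python
import numpy as np

def weighted_accuracy(y_true, y_pred):
    """WA: overall fraction of correctly classified utterances."""
    return float(np.mean(np.asarray(y_true) == np.asarray(y_pred)))

def unweighted_accuracy(y_true, y_pred):
    """UA: per-class recall averaged over classes (class-balanced)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    recalls = [np.mean(y_pred[y_true == c] == c) for c in np.unique(y_true)]
    return float(np.mean(recalls))

def ccc(x, y):
    """Concordance correlation coefficient for dimensional labels."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    cov = np.mean((x - x.mean()) * (y - y.mean()))
    return float(2 * cov / (x.var() + y.var() + (x.mean() - y.mean()) ** 2))

# Toy example with an imbalanced class 0: WA and UA diverge.
y_true = [0, 0, 0, 1, 1, 2]
y_pred = [0, 0, 1, 1, 1, 2]
print(weighted_accuracy(y_true, y_pred))    # 5/6: one of six wrong
print(unweighted_accuracy(y_true, y_pred))  # 8/9: mean of recalls 2/3, 1, 1
```

The gap between WA and UA on imbalanced test sets is why the tables list them separately; a model that ignores rare emotions can score well on WA while UA drops.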