SOTAVerified

Speech Emotion Recognition

Speech Emotion Recognition (SER) is a task in speech processing and computational paralinguistics that aims to recognize and categorize the emotions expressed in spoken language. The goal is to determine a speaker's emotional state, such as happiness, anger, sadness, or frustration, from characteristics of their speech such as prosody, pitch, and rhythm.
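
Pitch is one of the prosodic cues mentioned above. As a toy illustration only (not how any of the listed systems work), a frame's fundamental frequency can be estimated by picking the autocorrelation peak within a plausible pitch range; real SER pipelines use robust trackers and many more features:

```python
import math

def estimate_pitch(samples, sample_rate, fmin=50.0, fmax=500.0):
    """Estimate the fundamental frequency (Hz) of one frame via autocorrelation.

    Toy sketch for illustration: searches lags corresponding to the
    fmin..fmax pitch range and returns the lag with maximal correlation.
    """
    n = len(samples)
    lag_min = int(sample_rate / fmax)  # shortest period to consider
    lag_max = int(sample_rate / fmin)  # longest period to consider
    best_lag, best_corr = lag_min, float("-inf")
    for lag in range(lag_min, min(lag_max, n - 1)):
        corr = sum(samples[i] * samples[i + lag] for i in range(n - lag))
        if corr > best_corr:
            best_corr, best_lag = corr, lag
    return sample_rate / best_lag

# Usage: a 200 Hz sine sampled at 8 kHz should come out near 200 Hz.
sr = 8000
frame = [math.sin(2 * math.pi * 200 * t / sr) for t in range(1024)]
f0 = estimate_pitch(frame, sr)
```

The autocorrelation of a periodic signal peaks at multiples of its period; restricting the lag search to the expected pitch range picks out the fundamental rather than a harmonic.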

For multimodal emotion recognition, please upload your results to Multimodal Emotion Recognition on IEMOCAP.

Papers

Showing 251–300 of 431 papers

Title | Status | Hype
Analysis of Self-Supervised Learning and Dimensionality Reduction Methods in Clustering-Based Active Learning for Speech Emotion Recognition | Code | 0
AHD ConvNet for Speech Emotion Classification |  | 0
SyntAct: A Synthesized Database of Basic Emotions |  | 0
Acoustic-to-articulatory Speech Inversion with Multi-task Learning |  | 0
Learning Rate Curriculum | Code | 0
Emotion Recognition In Persian Speech Using Deep Neural Networks |  | 0
Real-time Speech Emotion Recognition Based on Syllable-Level Feature Extraction |  | 0
Speech Emotion Recognition with Global-Aware Fusion on Multi-scale Feature Representation | Code | 1
Learning Speech Emotion Representations in the Quaternion Domain | Code | 0
Probing Speech Emotion Recognition Transformers for Linguistic Knowledge |  | 0
Neural Architecture Search for Speech Emotion Recognition |  | 0
MMER: Multimodal Multi-task Learning for Speech Emotion Recognition | Code | 1
CTA-RNN: Channel and Temporal-wise Attention RNN Leveraging Pre-trained ASR Embeddings for Speech Emotion Recognition |  | 0
Speech Emotion Recognition with Co-Attention based Multi-level Acoustic Information | Code | 1
Continuous Metric Learning For Transferable Speech Emotion Recognition and Embedding Across Low-resource Languages |  | 0
Towards Transferable Speech Emotion Representation: On loss functions for cross-lingual latent representations |  | 0
A Dataset for Speech Emotion Recognition in Greek Theatrical Plays | Code | 0
A Speech Representation Anonymization Framework via Selective Noise Perturbation | Code | 0
EmotionNAS: Two-stream Neural Architecture Search for Speech Emotion Recognition |  | 0
SepTr: Separable Transformer for Audio Spectrogram Processing | Code | 1
Semi-FedSER: Semi-supervised Learning for Speech Emotion Recognition On Federated Learning using Multiview Pseudo-Labeling | Code | 1
Dawn of the transformer era in speech emotion recognition: closing the valence gap | Code | 2
Robust Federated Learning Against Adversarial Attacks for Speech Emotion Recognition |  | 0
Attention-based Region of Interest (ROI) Detection for Speech Emotion Recognition |  | 0
Speech Emotion Recognition using Self-Supervised Features |  | 0
Privacy-preserving Speech Emotion Recognition through Semi-Supervised Federated Learning | Code | 1
Speaker Normalization for Self-supervised Speech Emotion Recognition |  | 0
Self-supervised Graphs for Audio Representation Learning with Limited Labeled Data | Code | 0
Sentiment-Aware Automatic Speech Recognition pre-training for enhanced Speech Emotion Recognition |  | 0
Unsupervised Personalization of an Emotion Recognition System: The Unique Properties of the Externalization of Valence in Speech |  | 0
A study on cross-corpus speech emotion recognition and data augmentation |  | 0
A New Amharic Speech Emotion Dataset and Classification Benchmark |  | 0
A proposal for Multimodal Emotion Recognition using aural transformers and Action Units on RAVDESS dataset | Code | 1
Novel Dual-Channel Long Short-Term Memory Compressed Capsule Networks for Emotion Recognition |  | 0
Attribute Inference Attack of Speech Emotion Recognition in Federated Learning Settings | Code | 1
Classifying Emotional Utterances by Employing Multi-modal Speech Emotion Recognition |  | 0
Representation learning through cross-modal conditional teacher-student training for speech emotion recognition |  | 0
A Case Study on the Independence of Speech Emotion Recognition in Bangla and English Languages using Language-Independent Prosodic Features |  | 0
Multimodal Emotion Recognition on RAVDESS Dataset Using Transfer Learning |  | 0
Biologically inspired speech emotion recognition |  | 0
Speech Emotion Recognition Using Deep Sparse Auto-Encoder Extreme Learning Machine with a New Weighting Scheme and Spectro-Temporal Features Along with Classical Feature Selection and A New Quantum-Inspired Dimension Reduction Method |  | 0
A Fine-tuned Wav2vec 2.0/HuBERT Benchmark For Speech Emotion Recognition, Speaker Verification and Spoken Language Understanding |  | 0
Speech Emotion Recognition Using Quaternion Convolutional Neural Networks |  | 0
Fusing ASR Outputs in Joint Training for Speech Emotion Recognition |  | 0
End-to-End Speech Emotion Recognition: Challenges of Real-Life Emergency Call Centers Data Recordings |  | 0
Multistage linguistic conditioning of convolutional layers for speech emotion recognition |  | 0
Exploring Wav2vec 2.0 fine-tuning for improved speech emotion recognition | Code | 1
Arabic Speech Emotion Recognition Employing Wav2vec2.0 and HuBERT Based on BAVED Dataset | Code | 1
Light-SERNet: A lightweight fully convolutional neural network for speech emotion recognition | Code | 1
End-To-End Label Uncertainty Modeling for Speech-based Arousal Recognition Using Bayesian Neural Networks | Code | 0
Page 6 of 9

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Vertically long patch ViT | Accuracy | 94.07 |  | Unverified
2 | ConformerXL-P | Accuracy | 88.2 |  | Unverified
3 | CoordViT | Accuracy | 82.96 |  | Unverified
4 | SepTr + LeRaC | Accuracy | 70.95 |  | Unverified
5 | SepTr | Accuracy | 70.47 |  | Unverified
6 | ResNet-18 + SPEL | Accuracy | 68.12 |  | Unverified
7 | ViT | Accuracy | 67.81 |  | Unverified
8 | ResNet-18 + PyNADA | Accuracy | 65.15 |  | Unverified
9 | GRU | Accuracy | 55.01 |  | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SER with MTL | UA CV | 0.78 |  | Unverified
2 | emoDARTS | UA CV | 0.77 |  | Unverified
3 | LSTM+FC | WA | 0.76 |  | Unverified
4 | TAP | WA CV | 0.74 |  | Unverified
5 | SYSCOMB: BLSTMATT with CSA (session5) | UA | 0.74 |  | Unverified
6 | Partially Fine-tuned HuBERT Large | WA CV | 0.73 |  | Unverified
7 | CNN - DARTS | UA | 0.7 |  | Unverified
8 | CNN+LSTM | UA | 0.65 |  | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | VQ-MAE-S-12 (Frame) + Query2Emo | Accuracy | 84.1 |  | Unverified
2 | CNN-X (Shallow CNN) | Accuracy | 82.99 |  | Unverified
3 | xlsr-Wav2Vec2.0(FineTuning) | Accuracy | 81.82 |  | Unverified
4 | CNN-14 (Fine-Tuning) | Accuracy | 76.58 |  | Unverified
5 | AlexNet (FineTuning) | Accuracy | 61.67 |  | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | wav2small-Teacher | CCC | 0.76 |  | Unverified
2 | wavlm | CCC | 0.75 |  | Unverified
3 | w2v2-L-robust-12 | CCC | 0.75 |  | Unverified
4 | preCPC | CCC | 0.71 |  | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | wav2small-Teacher | CCC | 0.68 |  | Unverified
2 | wavlm | CCC | 0.67 |  | Unverified
3 | w2v2-L-robust-12 | CCC | 0.66 |  | Unverified
4 | preCPC | CCC | 0.64 |  | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | wav2small-Teacher | CCC | 0.68 |  | Unverified
2 | wavlm | CCC | 0.65 |  | Unverified
3 | w2v2-L-robust-12 | CCC | 0.64 |  | Unverified
4 | preCPC | CCC | 0.38 |  | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | DAWN-hidden-SVM | Unweighted Accuracy (UA) | 32.1 |  | Unverified
2 | Wav2Small-VAD-SVM | Unweighted Accuracy (UA) | 23.3 |  | Unverified
3 | Speechbrain Wav2Vec2 | Unweighted Accuracy (UA) | 20.7 |  | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | emotion2vec+base | Weighted Accuracy (WA) | 79.4 |  | Unverified
2 | emotion2vec+large | Weighted Accuracy (WA) | 69.5 |  | Unverified
3 | emotion2vec | Weighted Accuracy (WA) | 64.75 |  | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Dusha baseline | Macro F1 | 0.77 |  | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Dusha baseline | Macro F1 | 0.54 |  | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | VGG-optiVMD | 1:1 Accuracy | 96.09 |  | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | VQ-MAE-S-12 (Frame) + Query2Emo | Accuracy | 90.2 |  | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | PyResNet | Unweighted Accuracy (UA) | 0.43 |  | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | emoDARTS | UA | 0.66 |  | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LSTM | CCC (Arousal) | 0.76 |  | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CNN (1D) | Unweighted Accuracy | 65.2 |  | Unverified
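
The leaderboards above report several metrics: weighted accuracy (WA), unweighted accuracy (UA, with "CV" denoting cross-validated scores), Macro F1, and the concordance correlation coefficient (CCC) for continuous arousal/valence prediction. The three most common can be sketched as toy implementations (for reference only, not any site's official scoring code):

```python
from collections import defaultdict
from statistics import mean

def weighted_accuracy(y_true, y_pred):
    """WA: overall fraction of correct predictions."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def unweighted_accuracy(y_true, y_pred):
    """UA: per-class recall averaged over classes, so rare emotions count equally."""
    correct, total = defaultdict(int), defaultdict(int)
    for t, p in zip(y_true, y_pred):
        total[t] += 1
        correct[t] += (t == p)
    return mean(correct[c] / total[c] for c in total)

def ccc(x, y):
    """Concordance correlation coefficient for continuous targets:
    2*cov(x,y) / (var(x) + var(y) + (mean(x) - mean(y))**2)."""
    mx, my = mean(x), mean(y)
    vx = sum((a - mx) ** 2 for a in x) / len(x)
    vy = sum((b - my) ** 2 for b in y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / len(x)
    return 2 * cov / (vx + vy + (mx - my) ** 2)

# Toy usage on a class-imbalanced set (4 "ang", 2 "sad"):
y_true = ["ang", "ang", "ang", "ang", "sad", "sad"]
y_pred = ["ang", "ang", "ang", "ang", "ang", "sad"]
wa = weighted_accuracy(y_true, y_pred)    # 5/6: dominated by the majority class
ua = unweighted_accuracy(y_true, y_pred)  # (1.0 + 0.5) / 2 = 0.75
```

The example shows why SER papers often prefer UA on imbalanced corpora: a classifier biased toward the majority emotion scores higher on WA than on UA.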