SOTAVerified

Speech Emotion Recognition

Speech Emotion Recognition is a task in speech processing and computational paralinguistics that aims to recognize and categorize the emotions expressed in spoken language. The goal is to infer the emotional state of a speaker, such as happiness, anger, sadness, or frustration, from speech cues such as prosody, pitch, and rhythm.

For multimodal emotion recognition, please upload your results to Multimodal Emotion Recognition on IEMOCAP.
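The task description above mentions prosodic cues such as pitch, energy, and rhythm. As a minimal sketch (not taken from any listed paper), the fragment below computes two classic frame-level descriptors, short-time energy and zero-crossing rate, from a synthetic waveform using only NumPy; real SER systems typically feed richer features (MFCCs, pitch contours, or pretrained speech-encoder embeddings) into a classifier.

```python
import numpy as np

def frame_signal(x, frame_len=400, hop=160):
    """Slice a 1-D waveform into overlapping frames (25 ms / 10 ms at 16 kHz)."""
    n_frames = 1 + (len(x) - frame_len) // hop
    idx = np.arange(frame_len)[None, :] + hop * np.arange(n_frames)[:, None]
    return x[idx]

def prosodic_features(x):
    """Per-frame short-time energy and zero-crossing rate, crude prosody proxies."""
    frames = frame_signal(x)
    energy = np.mean(frames ** 2, axis=1)                                # loudness proxy
    zcr = np.mean(np.abs(np.diff(np.sign(frames), axis=1)) > 0, axis=1)  # pitch/noisiness proxy
    return np.stack([energy, zcr], axis=1)

# Toy input: 1 s of a 220 Hz tone at 16 kHz standing in for speech.
sr = 16000
t = np.arange(sr) / sr
x = 0.5 * np.sin(2 * np.pi * 220 * t)
feats = prosodic_features(x)
print(feats.shape)  # one (energy, zcr) pair per 10 ms hop
```

The frame length and hop size here are the common 25 ms / 10 ms convention, an assumption rather than a value prescribed by any benchmark on this page.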

Papers

Showing 201–250 of 431 papers

Persian Speech Emotion Recognition by Fine-Tuning Transformers
Personalized Adaptation with Pre-trained Speech Encoders for Continuous Emotion Recognition
Personalized Speech Emotion Recognition in Human-Robot Interaction using Vision Transformers
Pitch-Synchronous Single Frequency Filtering Spectrogram for Speech Emotion Recognition
Privacy against Real-Time Speech Emotion Detection via Acoustic Adversarial Evasion of Machine Learning
Probing Speech Emotion Recognition Transformers for Linguistic Knowledge
Prompting Audios Using Acoustic Properties For Emotion Representation
Real-time Speech Emotion Recognition Based on Syllable-Level Feature Extraction
Recognizing More Emotions with Less Data Using Self-supervised Transfer Learning
Reinforcement Learning for Emotional Text-to-Speech Synthesis with Improved Emotion Discriminability
Re-Parameterization of Lightweight Transformer for On-Device Speech Emotion Recognition
Representation learning through cross-modal conditional teacher-student training for speech emotion recognition
Representation Learning with Graph Neural Networks for Speech Emotion Recognition
Research on several key technologies in practical speech emotion recognition
Revealing Emotional Clusters in Speaker Embeddings: A Contrastive Learning Strategy for Speech Emotion Recognition
Exploring Acoustic Similarity in Emotional Speech and Music via Self-Supervised Representations
Coverage-Guaranteed Speech Emotion Recognition via Calibrated Uncertainty-Adaptive Prediction Sets
Robust Federated Learning Against Adversarial Attacks for Speech Emotion Recognition
Searching for Effective Preprocessing Method and CNN-based Architecture with Efficient Channel Attention on Speech Emotion Recognition
SEGAA: A Unified Approach to Predicting Age, Gender, and Emotion in Speech
Self-paced ensemble learning for speech and audio classification
Self-Supervised Attention Networks and Uncertainty Loss Weighting for Multi-Task Emotion Recognition on Vocal Bursts
Semi-supervised cross-lingual speech emotion recognition
Sentiment-Aware Automatic Speech Recognition pre-training for enhanced Speech Emotion Recognition
Sentiment recognition of Italian elderly through domain adaptation on cross-corpus speech dataset
SeQuiFi: Mitigating Catastrophic Forgetting in Speech Emotion Recognition with Sequential Class-Finetuning
SER_AMPEL: a multi-source dataset for speech emotion recognition of Italian older adults
Shallow over Deep Neural Networks: A empirical analysis for human emotion classification using audio data
Source Tracing of Synthetic Speech Systems Through Paralinguistic Pre-Trained Representations
Speaker Attentive Speech Emotion Recognition
Speaker-invariant Affective Representation Learning via Adversarial Training
Speaker Normalization for Self-supervised Speech Emotion Recognition
Speech and Text-Based Emotion Recognizer
Speech Emotion Recognition Based on CNN+LSTM Model
Speech Emotion Recognition Based on Multi-feature and Multi-lingual Fusion
Speech Emotion Recognition Based on Self-Attention Weight Correction for Acoustic and Text Features
Speech Emotion Recognition Considering Local Dynamic Features
Breaking Resource Barriers in Speech Emotion Recognition via Data Distillation
Speech Emotion Recognition Using CNN and Its Use Case in Digital Healthcare
Speech Emotion Recognition Using Deep Sparse Auto-Encoder Extreme Learning Machine with a New Weighting Scheme and Spectro-Temporal Features Along with Classical Feature Selection and A New Quantum-Inspired Dimension Reduction Method
Speech Emotion Recognition Using Quaternion Convolutional Neural Networks
Speech Emotion Recognition using Self-Supervised Features
Speech Emotion Recognition using Supervised Deep Recurrent System for Mental Health Monitoring
Speech Emotion Recognition using Support Vector Machine
Speech Emotion Recognition via an Attentive Time-Frequency Neural Network
Speech Emotion Recognition via Contrastive Loss under Siamese Networks
Speech Emotion Recognition via Nonlinear Dynamical Features (結合非線性動態特徵之語音情緒辨識) [in Chinese]
Speech Emotion Recognition with Distilled Prosodic and Linguistic Affect Representations
Speech Emotion Recognition with Dual-Sequence LSTM Architecture
Speech Emotion Recognition with Multiscale Area Attention and Data Augmentation
Page 5 of 9

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Vertically long patch ViT | Accuracy | 94.07 | | Unverified
2 | ConformerXL-P | Accuracy | 88.2 | | Unverified
3 | CoordViT | Accuracy | 82.96 | | Unverified
4 | SepTr + LeRaC | Accuracy | 70.95 | | Unverified
5 | SepTr | Accuracy | 70.47 | | Unverified
6 | ResNet-18 + SPEL | Accuracy | 68.12 | | Unverified
7 | ViT | Accuracy | 67.81 | | Unverified
8 | ResNet-18 + PyNADA | Accuracy | 65.15 | | Unverified
9 | GRU | Accuracy | 55.01 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SER with MTL | UA CV | 0.78 | | Unverified
2 | emoDARTS | UA CV | 0.77 | | Unverified
3 | LSTM+FC | WA | 0.76 | | Unverified
4 | TAP | WA CV | 0.74 | | Unverified
5 | SYSCOMB: BLSTMATT with CSA (session5) | UA | 0.74 | | Unverified
6 | Partially Fine-tuned HuBERT Large | WA CV | 0.73 | | Unverified
7 | CNN - DARTS | UA | 0.7 | | Unverified
8 | CNN+LSTM | UA | 0.65 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | VQ-MAE-S-12 (Frame) + Query2Emo | Accuracy | 84.1 | | Unverified
2 | CNN-X (Shallow CNN) | Accuracy | 82.99 | | Unverified
3 | xlsr-Wav2Vec2.0 (FineTuning) | Accuracy | 81.82 | | Unverified
4 | CNN-14 (Fine-Tuning) | Accuracy | 76.58 | | Unverified
5 | AlexNet (FineTuning) | Accuracy | 61.67 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | wav2small-Teacher | CCC | 0.76 | | Unverified
2 | wavlm | CCC | 0.75 | | Unverified
3 | w2v2-L-robust-12 | CCC | 0.75 | | Unverified
4 | preCPC | CCC | 0.71 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | wav2small-Teacher | CCC | 0.68 | | Unverified
2 | wavlm | CCC | 0.67 | | Unverified
3 | w2v2-L-robust-12 | CCC | 0.66 | | Unverified
4 | preCPC | CCC | 0.64 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | wav2small-Teacher | CCC | 0.68 | | Unverified
2 | wavlm | CCC | 0.65 | | Unverified
3 | w2v2-L-robust-12 | CCC | 0.64 | | Unverified
4 | preCPC | CCC | 0.38 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | DAWN-hidden-SVM | Unweighted Accuracy (UA) | 32.1 | | Unverified
2 | Wav2Small-VAD-SVM | Unweighted Accuracy (UA) | 23.3 | | Unverified
3 | Speechbrain Wav2Vec2 | Unweighted Accuracy (UA) | 20.7 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | emotion2vec+base | Weighted Accuracy (WA) | 79.4 | | Unverified
2 | emotion2vec+large | Weighted Accuracy (WA) | 69.5 | | Unverified
3 | emotion2vec | Weighted Accuracy (WA) | 64.75 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Dusha baseline | Macro F1 | 0.77 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Dusha baseline | Macro F1 | 0.54 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | VGG-optiVMD | 1:1 Accuracy | 96.09 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | VQ-MAE-S-12 (Frame) + Query2Emo | Accuracy | 90.2 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | PyResNet | Unweighted Accuracy (UA) | 0.43 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | emoDARTS | UA | 0.66 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LSTM | CCC (Arousal) | 0.76 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CNN (1D) | Unweighted Accuracy | 65.2 | | Unverified
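The tables above mix several metrics: WA (weighted accuracy, i.e. overall accuracy), UA (unweighted accuracy, the mean of per-class recalls, preferred on imbalanced emotion corpora), macro F1, and CCC (Lin's concordance correlation coefficient, used for continuous arousal/valence prediction). A minimal NumPy sketch of three of them, assuming integer class labels and real-valued dimensional scores; the toy data is illustrative, not drawn from any benchmark here.

```python
import numpy as np

def weighted_accuracy(y_true, y_pred):
    """WA: plain accuracy, dominated by frequent classes."""
    return np.mean(y_true == y_pred)

def unweighted_accuracy(y_true, y_pred):
    """UA: mean per-class recall, robust to class imbalance."""
    classes = np.unique(y_true)
    recalls = [np.mean(y_pred[y_true == c] == c) for c in classes]
    return np.mean(recalls)

def ccc(x, y):
    """Lin's concordance correlation coefficient for dimensional emotion scores."""
    mx, my = x.mean(), y.mean()
    cov = np.mean((x - mx) * (y - my))
    return 2 * cov / (x.var() + y.var() + (mx - my) ** 2)

# Imbalanced toy labels: WA looks decent, UA exposes the minority-class failure.
y_true = np.array([0, 0, 0, 0, 0, 0, 1, 1])
y_pred = np.array([0, 0, 0, 0, 0, 0, 0, 0])
print(weighted_accuracy(y_true, y_pred))    # 0.75
print(unweighted_accuracy(y_true, y_pred))  # 0.5

a = np.array([0.1, 0.4, 0.5, 0.9])
print(ccc(a, a))  # 1.0 for perfect agreement
```

The gap between WA and UA in the toy run is exactly why some tables above report both, and why "CV" suffixes mark cross-validated scores.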