SOTAVerified

Emotion Recognition

Emotion Recognition is an important area of research for enabling effective human-computer interaction. Human emotions can be detected from speech signals, facial expressions, body language, and electroencephalography (EEG). Source: Using Deep Autoencoders for Facial Expression Recognition

Papers

Showing 1901-1925 of 2041 papers

| Title | Status | Hype |
|---|---|---|
| FEALLM: Advancing Facial Emotion Analysis in Multimodal Large Language Models with Emotional Synergy and Reasoning | Code | 0 |
| Transfer Learning for Improving Speech Emotion Classification Accuracy | Code | 0 |
| Context-Aware Emotion Recognition Networks | Code | 0 |
| nEMO: Dataset of Emotional Speech in Polish | Code | 0 |
| EMOVOME: A Dataset for Emotion Recognition in Spontaneous Real-Life Speech | Code | 0 |
| Learning Noise-Robust Joint Representation for Multimodal Emotion Recognition under Incomplete Data Scenarios | Code | 0 |
| Do Stochastic Parrots have Feelings Too? Improving Neural Detection of Synthetic Text via Emotion Recognition | Code | 0 |
| Multi-Teacher Language-Aware Knowledge Distillation for Multilingual Speech Emotion Recognition | Code | 0 |
| Learning Robust Self-attention Features for Speech Emotion Recognition with Label-adaptive Mixup | Code | 0 |
| M-MELD: A Multilingual Multi-Party Dataset for Emotion Recognition in Conversations | Code | 0 |
| Learning Speech Emotion Representations in the Quaternion Domain | Code | 0 |
| Do Smart Glasses Dream of Sentimental Visions? Deep Emotionship Analysis for Eyewear Devices | Code | 0 |
| Transformer based neural networks for emotion recognition in conversations | Code | 0 |
| Where is Your Evidence: Improving Fact-checking by Justification Modeling | Code | 0 |
| NL-FIIT at IEST-2018: Emotion Recognition utilizing Neural Networks and Multi-level Preprocessing | Code | 0 |
| Leaving Some Facial Features Behind | Code | 0 |
| FATRER: Full-Attention Topic Regularizer for Accurate and Robust Conversational Emotion Recognition | Code | 0 |
| Leveraged Mel spectrograms using Harmonic and Percussive Components in Speech Emotion Recognition | Code | 0 |
| Leveraging Content and Acoustic Representations for Speech Emotion Recognition | Code | 0 |
| Leveraging Contrastive Learning and Self-Training for Multimodal Emotion Recognition with Limited Labeled Samples | Code | 0 |
| Speech Emotion Recognition Using Multi-hop Attention Mechanism | Code | 0 |
| Facial Expressions Recognition System Using FPGA-Based Convolutional Neural Network | Code | 0 |
| Speech Emotion Recognition Using Speech Feature and Word Embedding | Code | 0 |
| Context and System Fusion in Post-ASR Emotion Recognition with Large Language Models | Code | 0 |
| Leveraging LLM Embeddings for Cross Dataset Label Alignment and Zero Shot Music Emotion Prediction | Code | 0 |
Page 77 of 82

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | M2D-CLAP | EmoA | 77.4 | | Unverified |
| 2 | M2D2 | EmoA | 76.7 | | Unverified |
| 3 | M2D | EmoA | 76.1 | | Unverified |
| 4 | Jukebox (Pre-training: CALM) | EmoA | 72.1 | | Unverified |
| 5 | CLMR (Pre-training: contrastive) | EmoA | 67.8 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | LogisticRegression on posteriors of xlsr-Wav2Vec2.0 & bi-LSTM+Attention | Accuracy | 86.7 | | Unverified |
| 2 | MultiMAE-DER | WAR | 83.61 | | Unverified |
| 3 | Intermediate-Attention-Fusion | Accuracy | 81.58 | | Unverified |
| 4 | Logistic Regression on posteriors of the CNN-14 & biLSTM-GuidedST | Accuracy | 80.08 | | Unverified |
| 5 | ERANN-0-4 | Accuracy | 74.8 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | CAGE | Top-3 Accuracy (%) | 14.73 | | Unverified |
| 2 | FocusCLIP | Top-3 Accuracy (%) | 13.73 | | Unverified |
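Two entries above report Top-3 accuracy. As a minimal sketch of how that metric is computed (the function name and example data below are illustrative, not taken from any listed benchmark), a prediction counts as correct when the true label appears among the three highest-scoring classes:

```python
def top_k_accuracy(scores, labels, k=3):
    """Fraction of samples whose true label is among the k top-scoring classes."""
    correct = 0
    for class_scores, true_label in zip(scores, labels):
        # Indices of the k highest scores (stable sort breaks ties by index)
        top_k = sorted(range(len(class_scores)),
                       key=lambda i: class_scores[i], reverse=True)[:k]
        if true_label in top_k:
            correct += 1
    return correct / len(labels)
```

For example, with scores `[0.1, 0.5, 0.2, 0.15, 0.05]` and true label `2`, the sample counts as correct under Top-3 because class 2 has the second-highest score.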
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | VGG based | 5-class test accuracy | 66.13 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | MaSaC-ERC-Z | F1-score (Weighted) | 51.17 | | Unverified |
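The entry above uses a weighted F1-score, which is common for imbalanced emotion-label distributions. As a rough illustration (a hypothetical helper, not the benchmark's evaluation code), weighted F1 averages per-class F1 scores with weights proportional to each class's support:

```python
from collections import Counter

def weighted_f1(y_true, y_pred):
    """Per-class F1 averaged with weights proportional to class support."""
    support = Counter(y_true)
    total = len(y_true)
    score = 0.0
    for cls, count in support.items():
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == cls and p == cls)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != cls and p == cls)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == cls and p != cls)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        score += (count / total) * f1
    return score
```

Unlike macro-F1, this weighting lets frequent classes dominate the score, so it tracks overall performance on skewed datasets.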
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | BiHDM | Accuracy | 40.34 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | w2v2-L-robust-12 | Concordance correlation coefficient (CCC) | 0.64 | | Unverified |
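The entry above reports a concordance correlation coefficient (CCC), the standard agreement measure for continuous (dimensional) emotion prediction such as arousal or valence. A minimal sketch of the formula, assuming population (biased) variance estimates; the function is illustrative, not the benchmark's code:

```python
def ccc(x, y):
    """Concordance correlation coefficient between two equal-length sequences."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    var_x = sum((a - mean_x) ** 2 for a in x) / n
    var_y = sum((b - mean_y) ** 2 for b in y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y)) / n
    # CCC penalizes both low correlation and systematic offset between the series
    return 2 * cov / (var_x + var_y + (mean_x - mean_y) ** 2)
```

Perfectly matching predictions give 1.0, while a constant offset between predictions and targets lowers the score even when the correlation is perfect.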
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | 4D-aNN | Accuracy | 96.1 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | CNN | 1'"1 | | | Unverified |