SOTAVerified

Emotion Recognition

Emotion recognition is an important area of research for enabling effective human–computer interaction. Human emotions can be detected from speech signals, facial expressions, body language, and electroencephalography (EEG). Source: Using Deep Autoencoders for Facial Expression Recognition

Papers

Showing 26–50 of 2041 papers

| Title | Status | Hype |
|---|---|---|
| Are We There Yet? A Brief Survey of Music Emotion Prediction Datasets, Models and Outstanding Challenges | Code | 2 |
| BLSP-Emo: Towards Empathetic Large Speech-Language Models | Code | 2 |
| MMA-DFER: MultiModal Adaptation of unimodal models for Dynamic Facial Expression Recognition in-the-wild | Code | 2 |
| EMO-SUPERB: An In-depth Look at Speech Emotion Recognition | Code | 2 |
| EmoBench: Evaluating the Emotional Intelligence of Large Language Models | Code | 2 |
| Personalized Large Language Models | Code | 2 |
| HiCMAE: Hierarchical Contrastive Masked Autoencoder for Self-Supervised Audio-Visual Emotion Recognition | Code | 2 |
| LauraGPT: Listen, Attend, Understand, and Regenerate Audio with GPT | Code | 2 |
| MER 2023: Multi-label Learning, Modality Robustness, and Semi-Supervised Learning | Code | 2 |
| Towards Interpretable Mental Health Analysis with Large Language Models | Code | 2 |
| UniMSE: Towards Unified Multimodal Sentiment Analysis and Emotion Recognition | Code | 2 |
| CPED: A Large-Scale Chinese Personalized and Emotional Dialogue Dataset for Conversational AI | Code | 2 |
| EmoBank: Studying the Impact of Annotation Perspective and Representation Format on Dimensional Emotion Analysis | Code | 2 |
| EMOCA: Emotion Driven Monocular Face Capture and Animation | Code | 2 |
| Frame-level Prediction of Facial Expressions, Valence, Arousal and Action Units for Mobile Devices | Code | 2 |
| Dawn of the transformer era in speech emotion recognition: closing the valence gap | Code | 2 |
| SSAST: Self-Supervised Audio Spectrogram Transformer | Code | 2 |
| COSMIC: COmmonSense knowledge for eMotion Identification in Conversations | Code | 2 |
| audino: A Modern Annotation Tool for Audio and Speech | Code | 2 |
| Exploring Remote Physiological Signal Measurement under Dynamic Lighting Conditions at Night: Dataset, Experiment, and Analysis | Code | 1 |
| Towards Robust Multimodal Emotion Recognition under Missing Modalities and Distribution Shifts | Code | 1 |
| EfficientFER: EfficientNetv2 Based Deep Learning Approach for Facial Expression Recognition | Code | 1 |
| Evaluation in EEG Emotion Recognition: State-of-the-Art Review and Unified Framework | Code | 1 |
| Emotion-Qwen: Training Hybrid Experts for Unified Emotion and General Vision-Language Understanding | Code | 1 |
| PhysioSync: Temporal and Cross-Modal Contrastive Learning Inspired by Physiological Synchronization for EEG-Based Emotion Recognition | Code | 1 |
Page 2 of 82

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | M2D-CLAP | EmoA | 77.4 | — | Unverified |
| 2 | M2D2 | EmoA | 76.7 | — | Unverified |
| 3 | M2D | EmoA | 76.1 | — | Unverified |
| 4 | Jukebox (Pre-training: CALM) | EmoA | 72.1 | — | Unverified |
| 5 | CLMR (Pre-training: contrastive) | EmoA | 67.8 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | LogisticRegression on posteriors of xlsr-Wav2Vec2.0 & bi-LSTM+Attention | Accuracy | 86.7 | — | Unverified |
| 2 | MultiMAE-DER | WAR | 83.61 | — | Unverified |
| 3 | Intermediate-Attention-Fusion | Accuracy | 81.58 | — | Unverified |
| 4 | Logistic Regression on posteriors of the CNN-14 & biLSTM-GuidedST | Accuracy | 80.08 | — | Unverified |
| 5 | ERANN-0-4 | Accuracy | 74.8 | — | Unverified |
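The "Logistic Regression on posteriors" entries above refer to a common late-fusion recipe: train a logistic regression on the stacked class-posterior outputs of two upstream classifiers. A minimal sketch of the idea, assuming scikit-learn is available; the two upstream models here are simulated posteriors, not the actual xlsr-Wav2Vec2.0 or bi-LSTM systems:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, n_classes = 200, 4
y = rng.integers(0, n_classes, size=n)

def fake_posteriors(y, noise):
    # Simulate a classifier's softmax output biased toward the true class
    # (stand-in for a real upstream model's predictions).
    p = rng.random((len(y), n_classes)) * noise
    p[np.arange(len(y)), y] += 1.0
    return p / p.sum(axis=1, keepdims=True)

# Stack posteriors from two (simulated) upstream models as fusion features.
X = np.hstack([fake_posteriors(y, 0.8), fake_posteriors(y, 0.8)])

# Train the fusion model on one split, evaluate on the rest.
fusion = LogisticRegression(max_iter=1000).fit(X[:150], y[:150])
acc = fusion.score(X[150:], y[150:])
```

With real systems, `X` would hold each model's per-utterance class probabilities, so the fusion layer learns which model to trust per class.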
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | CAGE | Top-3 Accuracy (%) | 14.73 | — | Unverified |
| 2 | FocusCLIP | Top-3 Accuracy (%) | 13.73 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | VGG based | 5-class test accuracy | 66.13 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | MaSaC-ERC-Z | F1-score (Weighted) | 51.17 | — | Unverified |
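The "F1-score (Weighted)" metric above is the support-weighted mean of per-class F1 scores, which matters for emotion datasets with imbalanced class distributions. A minimal NumPy sketch of the computation (equivalent in spirit to scikit-learn's `f1_score(average="weighted")`):

```python
import numpy as np

def weighted_f1(y_true, y_pred):
    """Support-weighted mean of per-class F1 scores."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    classes, support = np.unique(y_true, return_counts=True)
    total = 0.0
    for c, n_c in zip(classes, support):
        tp = np.sum((y_pred == c) & (y_true == c))
        fp = np.sum((y_pred == c) & (y_true != c))
        fn = np.sum((y_pred != c) & (y_true == c))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        total += (n_c / len(y_true)) * f1  # weight by class support
    return total
```

Because each class's F1 is weighted by its frequency, a model that ignores rare emotion classes is penalized less than under macro-averaged F1.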
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | BiHDM | Accuracy | 40.34 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | w2v2-L-robust-12 | Concordance correlation coefficient (CCC) | 0.64 | — | Unverified |
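The concordance correlation coefficient (CCC) used above is the standard metric for dimensional (valence/arousal) emotion prediction: it combines Pearson correlation with a penalty for mean and scale mismatch between predictions and labels. A small NumPy sketch of the textbook formula:

```python
import numpy as np

def ccc(x, y):
    """Concordance correlation coefficient between two 1-D arrays."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()            # population variances
    cov = ((x - mx) * (y - my)).mean()   # population covariance
    # CCC = 2*cov / (var_x + var_y + (mean_x - mean_y)^2)
    return 2 * cov / (vx + vy + (mx - my) ** 2)
```

Unlike plain Pearson correlation, CCC only reaches 1 when predictions match the labels in location and scale, not merely in trend.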
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | 4D-aNN | Accuracy | 96.1 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | CNN | — | 1'"1 | — | Unverified |