SOTAVerified

Emotion Recognition

Emotion Recognition is an important area of research for enabling effective human-computer interaction. Human emotions can be detected from speech signals, facial expressions, body language, and electroencephalography (EEG). Source: Using Deep Autoencoders for Facial Expression Recognition

Papers

Showing 276–300 of 2041 papers

Title | Status | Hype
Personalized Speech Emotion Recognition in Human-Robot Interaction using Vision Transformers | | 0
Stimulus Modality Matters: Impact of Perceptual Evaluations from Different Modalities on Speech Emotion Recognition System Performance | | 0
TBDM-Net: Bidirectional Dense Networks with Gender Information for Speech Emotion Recognition | Code | 0
ReflectDiffu: Reflect between Emotion-intent Contagion and Mimicry for Empathetic Response Generation via a RL-Diffusion Framework | | 0
Large Language Model Based Generative Error Correction: A Challenge and Baselines for Speech Recognition, Speaker Tagging, and Emotion Recognition | | 0
Multi-Microphone and Multi-Modal Emotion Recognition in Reverberant Environment | | 0
Turbo your multi-modal classification with contrastive learning | | 0
Explaining Deep Learning Embeddings for Speech Emotion Recognition by Predicting Interpretable Acoustic Features | Code | 0
PHemoNet: A Multimodal Network for Physiological Signals | Code | 2
Hierarchical Hypercomplex Network for Multimodal Emotion Recognition | Code | 2
Early Joint Learning of Emotion Information Makes MultiModal Model Understand You Better | | 0
Recent Trends of Multimodal Affective Computing: A Survey from NLP Perspective | Code | 2
Multimodal Emotion Recognition with Vision-language Prompting and Modality Dropout | | 0
APEX: Attention on Personality based Emotion ReXgnition Framework | | 0
Complex Emotion Recognition System using basic emotions via Facial Expression, EEG, and ECG Signals: a review | | 0
Leveraging Content and Acoustic Representations for Speech Emotion Recognition | Code | 0
Consensus-based Distributed Quantum Kernel Learning for Speech Recognition | | 0
Better Spanish Emotion Recognition In-the-wild: Bringing Attention to Deep Spectrum Voice Analysis | | 0
Mamba-Enhanced Text-Audio-Video Alignment Network for Emotion Recognition in Conversations | Code | 1
Audio-Guided Fusion Techniques for Multimodal Emotion Analysis | | 0
Searching for Effective Preprocessing Method and CNN-based Architecture with Efficient Channel Attention on Speech Emotion Recognition | | 0
ResEmoteNet: Bridging Accuracy and Loss Reduction in Facial Emotion Recognition | Code | 0
Progressive Residual Extraction based Pre-training for Speech Representation Learning | | 0
From Text to Emotion: Unveiling the Emotion Annotation Capabilities of LLMs | Code | 0
SpeechCaps: Advancing Instruction-Based Universal Speech Models with Multi-Talker Speaking Style Captioning | Code | 0
Page 12 of 82

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | M2D-CLAP | EmoA | 77.4 | | Unverified
2 | M2D2 | EmoA | 76.7 | | Unverified
3 | M2D | EmoA | 76.1 | | Unverified
4 | Jukebox (Pre-training: CALM) | EmoA | 72.1 | | Unverified
5 | CLMR (Pre-training: contrastive) | EmoA | 67.8 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | Logistic Regression on posteriors of xlsr-Wav2Vec2.0 & bi-LSTM+Attention | Accuracy | 86.7 | | Unverified
2 | MultiMAE-DER | WAR | 83.61 | | Unverified
3 | Intermediate-Attention-Fusion | Accuracy | 81.58 | | Unverified
4 | Logistic Regression on posteriors of the CNN-14 & biLSTM-GuidedST | Accuracy | 80.08 | | Unverified
5 | ERANN-0-4 | Accuracy | 74.8 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | CAGE | Top-3 Accuracy (%) | 14.73 | | Unverified
2 | FocusCLIP | Top-3 Accuracy (%) | 13.73 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | VGG based | 5-class test accuracy | 66.13 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | MaSaC-ERC-Z | F1-score (Weighted) | 51.17 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | BiHDM | Accuracy | 40.34 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | w2v2-L-robust-12 | Concordance correlation coefficient (CCC) | 0.64 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | 4D-aNN | Accuracy | 96.1 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | CNN | 1'" | 1 | | Unverified