SOTAVerified

Emotion Recognition

Emotion recognition is an important research area for enabling effective human-computer interaction. Human emotions can be detected from speech signals, facial expressions, body language, and electroencephalography (EEG). Source: Using Deep Autoencoders for Facial Expression Recognition

Papers

Showing 426-450 of 2041 papers

Title | Status | Hype
Teleology-Driven Affective Computing: A Causal Framework for Sustained Well-Being | | 0
Akan Cinematic Emotions (ACE): A Multimodal Multi-party Dataset for Emotion Recognition in Movie Dialogues | | 0
Interpretable Concept-based Deep Learning Framework for Multimodal Human Behavior Modeling | | 0
A Novel Dialect-Aware Framework for the Classification of Arabic Dialects and Emotions | | 0
A Novel Approach to for Multimodal Emotion Recognition : Multimodal semantic information fusion | | 0
Enhancing Higher Education with Generative AI: A Multimodal Approach for Personalised Learning | | 0
RAMer: Reconstruction-based Adversarial Model for Multi-party Multi-modal Multi-label Emotion Recognition | Code | 0
EmoBench-M: Benchmarking Emotional Intelligence for Multimodal Large Language Models | | 0
Emotion Recognition and Generation: A Comprehensive Review of Face, Speech, and Text Modalities | | 0
Mini-ResEmoteNet: Leveraging Knowledge Distillation for Human-Centered Design | | 0
Multimodal Magic Elevating Depression Detection with a Fusion of Text and Audio Intelligence | | 0
Linguistic Analysis of Sinhala YouTube Comments on Sinhala Music Videos: A Dataset Study | | 0
Divergent Emotional Patterns in Disinformation on Social Media? An Analysis of Tweets and TikToks about the DANA in Valencia | | 0
Fuzzy-aware Loss for Source-free Domain Adaptation in Visual Emotion Recognition | | 0
HumanOmni: A Large Vision-Speech Language Model for Human-Centric Video Understanding | | 0
Cross-modal Context Fusion and Adaptive Graph Convolutional Network for Multimodal Conversational Emotion Recognition | | 0
Adaptive Progressive Attention Graph Neural Network for EEG Emotion Recognition | | 0
Why disentanglement-based speaker anonymization systems fail at preserving emotions? | | 0
EmoFormer: A Text-Independent Speech Emotion Recognition using a Hybrid Transformer-CNN model | | 0
EmoTech: A Multi-modal Speech Emotion Recognition Using Multi-source Low-level Information with Hybrid Recurrent Network | | 0
Representation Learning with Parameterised Quantum Circuits for Advancing Speech Emotion Recognition | | 0
Uncertainty Estimation in the Real World: A Study on Music Emotion Recognition | | 0
LLM supervised Pre-training for Multimodal Emotion Recognition in Conversations | | 0
AIMA at SemEval-2024 Task 10: History-Based Emotion Recognition in Hindi-English Code-Mixed Conversations | | 0
Omni-Emotion: Extending Video MLLM with Detailed Face and Audio Modeling for Multimodal Emotion Analysis | | 0
Page 18 of 82

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | M2D-CLAP | EmoA | 77.4 | | Unverified
2 | M2D2 | EmoA | 76.7 | | Unverified
3 | M2D | EmoA | 76.1 | | Unverified
4 | Jukebox (Pre-training: CALM) | EmoA | 72.1 | | Unverified
5 | CLMR (Pre-training: contrastive) | EmoA | 67.8 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | LogisticRegression on posteriors of xlsr-Wav2Vec2.0&bi-LSTM+Attention | Accuracy | 86.7 | | Unverified
2 | MultiMAE-DER | WAR | 83.61 | | Unverified
3 | Intermediate-Attention-Fusion | Accuracy | 81.58 | | Unverified
4 | Logistic Regression on posteriors of the CNN-14&biLSTM-GuidedST | Accuracy | 80.08 | | Unverified
5 | ERANN-0-4 | Accuracy | 74.8 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | CAGE | Top-3 Accuracy (%) | 14.73 | | Unverified
2 | FocusCLIP | Top-3 Accuracy (%) | 13.73 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | VGG based | 5-class test accuracy | 66.13 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | MaSaC-ERC-Z | F1-score (Weighted) | 51.17 | | Unverified
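One leaderboard above scores models with a weighted F1-score, the standard choice for emotion datasets with imbalanced class distributions. As a reference for how that number is computed, here is a minimal sketch (the function name and toy labels are illustrative, not from this site): per-class F1 is averaged with each class's support as its weight.

```python
from collections import Counter

def weighted_f1(y_true, y_pred):
    """F1-score (Weighted): per-class F1 averaged by class support."""
    labels = sorted(set(y_true) | set(y_pred))
    support = Counter(y_true)
    n = len(y_true)
    total = 0.0
    for c in labels:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        total += (support[c] / n) * f1  # weight by class frequency
    return total
```

Unlike macro-F1, this weighting lets frequent emotion classes dominate the score, which is worth keeping in mind when comparing leaderboard entries across datasets.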
# | Model | Metric | Claimed | Verified | Status
1 | BiHDM | Accuracy | 40.34 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | w2v2-L-robust-12 | Concordance correlation coefficient (CCC) | 0.64 | | Unverified
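The w2v2-L-robust-12 entry is scored with the concordance correlation coefficient (CCC), the usual metric for dimensional (e.g. valence/arousal) emotion regression. A minimal NumPy sketch of Lin's CCC, assuming 1-D arrays of gold and predicted values (names are illustrative):

```python
import numpy as np

def ccc(y_true, y_pred):
    """Lin's concordance correlation coefficient:
    2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2)."""
    x = np.asarray(y_true, dtype=float)
    y = np.asarray(y_pred, dtype=float)
    cov = np.mean((x - x.mean()) * (y - y.mean()))  # population covariance
    return 2 * cov / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)
```

CCC equals 1 only for perfect agreement; unlike Pearson correlation it also penalizes scale and mean shifts, so a predictor with a constant offset scores below 1.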
# | Model | Metric | Claimed | Verified | Status
1 | 4D-aNN | Accuracy | 96.1 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | CNN | 1'" | 1 | | Unverified