SOTAVerified

Emotion Recognition

Emotion recognition is an important research area for enabling effective human-computer interaction. Human emotions can be detected from speech signals, facial expressions, body language, and electroencephalography (EEG). Source: Using Deep Autoencoders for Facial Expression Recognition
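As an illustration of the speech modality mentioned above, the sketch below computes a log-mel spectrogram, the standard input feature for CNN-based speech emotion recognition (e.g. the mel-spectrogram CNN paper listed below). This is a minimal NumPy-only sketch, not code from any listed paper; the function name and parameter defaults are illustrative choices.

```python
import numpy as np

def log_mel_spectrogram(signal, sr=16000, n_fft=512, hop=256, n_mels=40):
    # Frame the signal and apply a Hann window to each frame
    frames = [signal[i:i + n_fft] * np.hanning(n_fft)
              for i in range(0, len(signal) - n_fft + 1, hop)]
    power = np.abs(np.fft.rfft(frames, axis=1)) ** 2  # power spectrum per frame

    # Triangular filterbank with centers equally spaced on the mel scale
    mel_pts = np.linspace(0, 2595 * np.log10(1 + sr / 2 / 700), n_mels + 2)
    hz_pts = 700 * (10 ** (mel_pts / 2595) - 1)
    bins = np.floor((n_fft + 1) * hz_pts / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        l, c, r = bins[m - 1], bins[m], bins[m + 1]
        fbank[m - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fbank[m - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)

    # Log compression; result shape is (num_frames, n_mels)
    return np.log(power @ fbank.T + 1e-10)

# Toy input: one second of a 440 Hz tone standing in for a speech clip
t = np.linspace(0, 1, 16000, endpoint=False)
feat = log_mel_spectrogram(np.sin(2 * np.pi * 440 * t))
print(feat.shape)  # (61, 40)
```

In practice a library such as librosa would be used instead; the resulting (frames × mel-bands) matrix is treated as a single-channel image and fed to a CNN classifier over emotion labels.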

Papers

Showing 401–450 of 2041 papers

| Title | Status | Hype |
|-------|--------|------|
| Towards Practical Emotion Recognition: An Unsupervised Source-Free Approach for EEG Domain Adaptation | | 0 |
| Hierarchical Adaptive Expert for Multimodal Sentiment Analysis | | 0 |
| Large Language Models Meet Contrastive Learning: Zero-Shot Emotion Recognition Across Languages | Code | 0 |
| Deep Learning for Speech Emotion Recognition: A CNN Approach Utilizing Mel Spectrograms | | 0 |
| Modeling speech emotion with label variance and analyzing performance across speakers and unseen acoustic conditions | | 0 |
| Enhancing Multi-Label Emotion Analysis and Corresponding Intensities for Ethiopian Languages | | 0 |
| Coverage-Guaranteed Speech Emotion Recognition via Calibrated Uncertainty-Adaptive Prediction Sets | | 0 |
| FACE: Few-shot Adapter with Cross-view Fusion for Cross-subject EEG Emotion Recognition | | 0 |
| Feature-Based Dual Visual Feature Extraction Model for Compound Multimodal Emotion Recognition | Code | 0 |
| Unifying EEG and Speech for Emotion Recognition: A Two-Step Joint Learning Framework for Handling Missing EEG Data During Inference | | 0 |
| Modelling Emotions in Face-to-Face Setting: The Interplay of Eye-Tracking, Personality, and Temporal Dynamics | | 0 |
| United we stand, Divided we fall: Handling Weak Complementary Relationships for Audio-Visual Emotion Recognition in Valence-Arousal Space | | 0 |
| Compound Expression Recognition via Large Vision-Language Models | | 0 |
| Mamba-VA: A Mamba-based Approach for Continuous Emotion Recognition in Valence-Arousal Space | Code | 0 |
| Technical Approach for the EMI Challenge in the 8th Affective Behavior Analysis in-the-Wild Competition | | 0 |
| Emotion Recognition with CLIP and Sequential Learning | | 0 |
| CULEMO: Cultural Lenses on Emotion -- Benchmarking LLMs for Cross-Cultural Emotion Understanding | | 0 |
| CALLM: Understanding Cancer Survivors' Emotions and Intervention Opportunities via Mobile Diaries and Context-Aware Language Models | | 0 |
| Synthetic Data Generation of Body Motion Data by Neural Gas Network for Emotion Recognition | Code | 0 |
| Heterogeneous bimodal attention fusion for speech emotion recognition | | 0 |
| Multimodal Emotion Recognition and Sentiment Analysis in Multi-Party Conversation Contexts | | 0 |
| Bimodal Connection Attention Fusion for Speech Emotion Recognition | | 0 |
| Personalized Emotion Detection from Floor Vibrations Induced by Footsteps | | 0 |
| Qieemo: Speech Is All You Need in the Emotion Recognition in Conversations | | 0 |
| ECG-EmotionNet: Nested Mixture of Expert (NMoE) Adaptation of ECG-Foundation Model for Driver Emotion Recognition | | 0 |
| Teleology-Driven Affective Computing: A Causal Framework for Sustained Well-Being | | 0 |
| Akan Cinematic Emotions (ACE): A Multimodal Multi-party Dataset for Emotion Recognition in Movie Dialogues | | 0 |
| Interpretable Concept-based Deep Learning Framework for Multimodal Human Behavior Modeling | | 0 |
| A Novel Dialect-Aware Framework for the Classification of Arabic Dialects and Emotions | | 0 |
| A Novel Approach to for Multimodal Emotion Recognition : Multimodal semantic information fusion | | 0 |
| Enhancing Higher Education with Generative AI: A Multimodal Approach for Personalised Learning | | 0 |
| RAMer: Reconstruction-based Adversarial Model for Multi-party Multi-modal Multi-label Emotion Recognition | Code | 0 |
| EmoBench-M: Benchmarking Emotional Intelligence for Multimodal Large Language Models | | 0 |
| Emotion Recognition and Generation: A Comprehensive Review of Face, Speech, and Text Modalities | | 0 |
| Mini-ResEmoteNet: Leveraging Knowledge Distillation for Human-Centered Design | | 0 |
| Multimodal Magic Elevating Depression Detection with a Fusion of Text and Audio Intelligence | | 0 |
| Linguistic Analysis of Sinhala YouTube Comments on Sinhala Music Videos: A Dataset Study | | 0 |
| Divergent Emotional Patterns in Disinformation on Social Media? An Analysis of Tweets and TikToks about the DANA in Valencia | | 0 |
| Fuzzy-aware Loss for Source-free Domain Adaptation in Visual Emotion Recognition | | 0 |
| HumanOmni: A Large Vision-Speech Language Model for Human-Centric Video Understanding | | 0 |
| Cross-modal Context Fusion and Adaptive Graph Convolutional Network for Multimodal Conversational Emotion Recognition | | 0 |
| Adaptive Progressive Attention Graph Neural Network for EEG Emotion Recognition | | 0 |
| Why disentanglement-based speaker anonymization systems fail at preserving emotions? | | 0 |
| EmoFormer: A Text-Independent Speech Emotion Recognition using a Hybrid Transformer-CNN model | | 0 |
| EmoTech: A Multi-modal Speech Emotion Recognition Using Multi-source Low-level Information with Hybrid Recurrent Network | | 0 |
| Representation Learning with Parameterised Quantum Circuits for Advancing Speech Emotion Recognition | | 0 |
| Uncertainty Estimation in the Real World: A Study on Music Emotion Recognition | | 0 |
| LLM supervised Pre-training for Multimodal Emotion Recognition in Conversations | | 0 |
| AIMA at SemEval-2024 Task 10: History-Based Emotion Recognition in Hindi-English Code-Mixed Conversations | | 0 |
| Omni-Emotion: Extending Video MLLM with Detailed Face and Audio Modeling for Multimodal Emotion Analysis | | 0 |
Page 9 of 41

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | M2D-CLAP | EmoA | 77.4 | | Unverified |
| 2 | M2D2 | EmoA | 76.7 | | Unverified |
| 3 | M2D | EmoA | 76.1 | | Unverified |
| 4 | Jukebox (Pre-training: CALM) | EmoA | 72.1 | | Unverified |
| 5 | CLMR (Pre-training: contrastive) | EmoA | 67.8 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | LogisticRegression on posteriors of xlsr-Wav2Vec2.0&bi-LSTM+Attention | Accuracy | 86.7 | | Unverified |
| 2 | MultiMAE-DER | WAR | 83.61 | | Unverified |
| 3 | Intermediate-Attention-Fusion | Accuracy | 81.58 | | Unverified |
| 4 | Logistic Regression on posteriors of the CNN-14&biLSTM-GuidedST | Accuracy | 80.08 | | Unverified |
| 5 | ERANN-0-4 | Accuracy | 74.8 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | CAGE | Top-3 Accuracy (%) | 14.73 | | Unverified |
| 2 | FocusCLIP | Top-3 Accuracy (%) | 13.73 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | VGG based | 5-class test accuracy | 66.13 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | MaSaC-ERC-Z | F1-score (Weighted) | 51.17 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | BiHDM | Accuracy | 40.34 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | w2v2-L-robust-12 | Concordance correlation coefficient (CCC) | 0.64 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | 4D-aNN | Accuracy | 96.1 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | CNN | 1'" | 1 | | Unverified |