SOTAVerified

Emotion Recognition

Emotion recognition is an important area of research for enabling effective human-computer interaction. Human emotions can be detected from speech signals, facial expressions, body language, and electroencephalography (EEG). Source: Using Deep Autoencoders for Facial Expression Recognition
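Whatever the input modality, the papers below typically frame emotion recognition as classification over extracted feature vectors. A minimal sketch of that framing, using a nearest-centroid classifier on toy 2-D "acoustic" features (the feature names, values, and labels here are illustrative assumptions, not drawn from any benchmark on this page):

```python
# Toy sketch: emotion recognition as classification over feature vectors.
# The 2-D features (e.g., normalized pitch mean, energy) are hypothetical.
import math

TRAIN = {
    "happy":   [(0.9, 0.8), (0.8, 0.9)],
    "sad":     [(0.1, 0.2), (0.2, 0.1)],
    "neutral": [(0.5, 0.5), (0.4, 0.6)],
}

def centroid(points):
    # Per-dimension mean of a list of equal-length tuples.
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

CENTROIDS = {label: centroid(pts) for label, pts in TRAIN.items()}

def predict(features):
    """Return the emotion whose training centroid is closest (Euclidean)."""
    return min(CENTROIDS, key=lambda lbl: math.dist(features, CENTROIDS[lbl]))

print(predict((0.85, 0.9)))  # → happy
```

Real systems replace the toy features with learned representations (CNN embeddings, wav2vec posteriors, etc.) and the nearest-centroid rule with a trained classifier, but the input/output contract is the same: feature vector in, emotion label out.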

Papers

Showing 376–400 of 2041 papers

| Title | Status | Hype |
| --- | --- | --- |
| VAEmo: Efficient Representation Learning for Visual-Audio Emotion with Knowledge Injection | Code | 0 |
| Emotions in the Loop: A Survey of Affective Computing for Emotional Support | | 0 |
| BERSting at the Screams: A Benchmark for Distanced, Emotional and Shouted Speech Recognition | Code | 0 |
| Spatiotemporal Emotional Synchrony in Dyadic Interactions: The Role of Speech Conditions in Facial and Vocal Affective Alignment | | 0 |
| Emotion Recognition in Contemporary Dance Performances Using Laban Movement Analysis | | 0 |
| DB-GNN: Dual-Branch Graph Neural Network with Multi-Level Contrastive Learning for Jointly Identifying Within- and Cross-Frequency Coupled Brain Networks | | 0 |
| Towards Robust Multimodal Physiological Foundation Models: Handling Arbitrary Missing Modalities | | 0 |
| Real-Time Imitation of Human Head Motions, Blinks and Emotions by Nao Robot: A Closed-Loop Approach | | 0 |
| ClimaEmpact: Domain-Aligned Small Language Models and Datasets for Extreme Weather Analytics | | 0 |
| Optimism, Expectation, or Sarcasm? Multi-Class Hope Speech Detection in Spanish and English | | 0 |
| Visual and textual prompts for enhancing emotion recognition in video | | 0 |
| PsyCounAssist: A Full-Cycle AI-Powered Psychological Counseling Assistant System | | 0 |
| Facial Geometric Feature Extraction for Dimensional Emotion Analysis Using Genetic Programming | | 0 |
| Multimodal Representation Learning Techniques for Comprehensive Facial State Analysis | | 0 |
| Attributes-aware Visual Emotion Representation Learning | | 0 |
| Leveraging Label Potential for Enhanced Multimodal Emotion Recognition | | 0 |
| Emotion Recognition Using Convolutional Neural Networks | | 0 |
| BeMERC: Behavior-Aware MLLM-based Framework for Multimodal Emotion Recognition in Conversation | | 0 |
| Unimodal-driven Distillation in Multimodal Emotion Recognition with Dynamic Fusion | | 0 |
| M2D2: Exploring General-purpose Audio-Language Representations Beyond CLAP | | 0 |
| Modeling Challenging Patient Interactions: LLMs for Medical Communication Training | | 0 |
| Hybrid Emotion Recognition: Enhancing Customer Interactions Through Acoustic and Textual Analysis | | 0 |
| Leveraging LLMs with Iterative Loop Structure for Enhanced Social Intelligence in Video Question Answering | | 0 |
| OmniVox: Zero-Shot Emotion Recognition with Omni-LLMs | | 0 |
| GatedxLSTM: A Multimodal Affective Computing Approach for Emotion Recognition in Conversations | | 0 |
Page 16 of 82

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | M2D-CLAP | EmoA | 77.4 | | Unverified |
| 2 | M2D2 | EmoA | 76.7 | | Unverified |
| 3 | M2D | EmoA | 76.1 | | Unverified |
| 4 | Jukebox (Pre-training: CALM) | EmoA | 72.1 | | Unverified |
| 5 | CLMR (Pre-training: contrastive) | EmoA | 67.8 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | LogisticRegression on posteriors of xlsr-Wav2Vec2.0 & bi-LSTM+Attention | Accuracy | 86.7 | | Unverified |
| 2 | MultiMAE-DER | WAR | 83.61 | | Unverified |
| 3 | Intermediate-Attention-Fusion | Accuracy | 81.58 | | Unverified |
| 4 | Logistic Regression on posteriors of the CNN-14 & biLSTM-GuidedST | Accuracy | 80.08 | | Unverified |
| 5 | ERANN-0-4 | Accuracy | 74.8 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | CAGE | Top-3 Accuracy (%) | 14.73 | | Unverified |
| 2 | FocusCLIP | Top-3 Accuracy (%) | 13.73 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | VGG based | 5-class test accuracy | 66.13 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | MaSaC-ERC-Z | F1-score (Weighted) | 51.17 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | BiHDM | Accuracy | 40.34 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | w2v2-L-robust-12 | Concordance correlation coefficient (CCC) | 0.64 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | 4D-aNN | Accuracy | 96.1 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | CNN | | 1'"1 | | Unverified |