SOTAVerified

Emotion Recognition

Emotion recognition is an important research area for enabling effective human-computer interaction. Human emotions can be detected from speech signals, facial expressions, body language, and electroencephalography (EEG). Source: Using Deep Autoencoders for Facial Expression Recognition

Papers

Showing 1551–1575 of 2041 papers

Title | Status | Hype
Mitigating Group Bias in Federated Learning for Heterogeneous Devices | | 0
Mitigating Subgroup Disparities in Multi-Label Speech Emotion Recognition: A Pseudo-Labeling and Unsupervised Learning Approach | | 0
MixedEmotions: Social Semantic Emotion Analysis for Innovative Multilingual Big Data Analytics Markets | | 0
MMDS: A Multimodal Medical Diagnosis System Integrating Image Analysis and Knowledge-based Departmental Consultation | | 0
MMDAG: Multimodal Directed Acyclic Graph Network for Emotion Recognition in Conversation | | 0
MMTF-DES: A Fusion of Multimodal Transformer Models for Desire, Emotion, and Sentiment Analysis of Social Media Data | | 0
Modality-based Factorization for Multimodal Fusion | | 0
Modality Influence in Multimodal Machine Learning | | 0
Modeling Feature Representations for Affective Speech using Generative Adversarial Networks | | 0
Modeling speech emotion with label variance and analyzing performance across speakers and unseen acoustic conditions | | 0
Modelling Emotion Dynamics in Song Lyrics with State Space Models | | 0
Modelling Emotions in Face-to-Face Setting: The Interplay of Eye-Tracking, Personality, and Temporal Dynamics | | 0
Modelling Representation Noise in Emotion Analysis using Gaussian Processes | | 0
Modelling Temporal Information Using Discrete Fourier Transform for Recognizing Emotions in User-generated Videos | | 0
Modulation spectral features for speech emotion recognition using deep neural networks | | 0
Mouth Articulation-Based Anchoring for Improved Cross-Corpus Speech Emotion Recognition | | 0
MSAC: Multiple Speech Attribute Control Method for Reliable Speech Emotion Recognition | | 0
MSA-GCN: Multiscale Adaptive Graph Convolution Network for Gait Emotion Recognition | | 0
MSM-VC: High-fidelity Source Style Transfer for Non-Parallel Voice Conversion by Multi-scale Style Modeling | | 0
MSP-Podcast SER Challenge 2024: L'antenne du Ventoux Multimodal Self-Supervised Learning for Speech Emotion Recognition | | 0
MUCS@DravidianLangTech@ACL2022: Ensemble of Logistic Regression Penalties to Identify Emotions in Tamil Text | | 0
Multi-Branch Deep Radial Basis Function Networks for Facial Emotion Recognition | | 0
Multi-channel Emotion Analysis for Consensus Reaching in Group Movie Recommendation Systems | | 0
Multi-Classifier Interactive Learning for Ambiguous Speech Emotion Recognition | | 0
Multi-Cue Adaptive Emotion Recognition Network | | 0
Page 63 of 82
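The pagination figures above are mutually consistent with a 25-papers-per-page listing (25 titles appear on this page; the per-page count is inferred, not stated by the site). A quick sketch of the arithmetic:

```python
import math

total_papers = 2041
per_page = 25   # inferred from the 25 titles listed on this page
page = 63

# 1-based index of the first and last item shown on this page
first = (page - 1) * per_page + 1
last = min(page * per_page, total_papers)

total_pages = math.ceil(total_papers / per_page)

print(first, last, total_pages)  # 1551 1575 82
```

This matches the "Showing 1551–1575 of 2041 papers" and "Page 63 of 82" figures on the page.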

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | M2D-CLAP | EmoA | 77.4 | | Unverified
2 | M2D2 | EmoA | 76.7 | | Unverified
3 | M2D | EmoA | 76.1 | | Unverified
4 | Jukebox (Pre-training: CALM) | EmoA | 72.1 | | Unverified
5 | CLMR (Pre-training: contrastive) | EmoA | 67.8 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LogisticRegression on posteriors of xlsr-Wav2Vec2.0 & bi-LSTM+Attention | Accuracy | 86.7 | | Unverified
2 | MultiMAE-DER | WAR | 83.61 | | Unverified
3 | Intermediate-Attention-Fusion | Accuracy | 81.58 | | Unverified
4 | Logistic Regression on posteriors of the CNN-14 & biLSTM-GuidedST | Accuracy | 80.08 | | Unverified
5 | ERANN-0-4 | Accuracy | 74.8 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CAGE | Top-3 Accuracy (%) | 14.73 | | Unverified
2 | FocusCLIP | Top-3 Accuracy (%) | 13.73 | | Unverified
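Top-3 accuracy, as used in the table above, counts a prediction as correct when the true label appears among the model's three highest-scoring classes. A minimal sketch of the metric (the score matrix and labels below are made-up illustration data, not results from the listed models):

```python
import numpy as np

def top_k_accuracy(scores, labels, k=3):
    """Fraction of samples whose true label is among the k highest-scoring classes."""
    scores = np.asarray(scores)
    # indices of the k best-scoring classes for each sample
    topk = np.argsort(scores, axis=1)[:, -k:]
    hits = [label in row for row, label in zip(topk, labels)]
    return float(np.mean(hits))

# illustration: 4 samples over 5 emotion classes
scores = np.array([
    [0.1, 0.5, 0.2, 0.1, 0.1],
    [0.3, 0.1, 0.4, 0.1, 0.1],
    [0.2, 0.2, 0.2, 0.2, 0.2],
    [0.0, 0.1, 0.1, 0.7, 0.1],
])
labels = [1, 4, 0, 2]
print(top_k_accuracy(scores, labels, k=3))
```

Note that ties (as in the third sample) are broken by index order in `argsort`, which is why exact tie handling can differ slightly between implementations.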

# | Model | Metric | Claimed | Verified | Status
1 | VGG based | 5-class test accuracy | 66.13 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | MaSaC-ERC-Z | F1-score (Weighted) | 51.17 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | BiHDM | Accuracy | 40.34 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | w2v2-L-robust-12 | Concordance correlation coefficient (CCC) | 0.64 | | Unverified
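The concordance correlation coefficient (CCC) used above is the standard agreement measure for continuous emotion attributes (e.g. arousal or valence regression): unlike plain Pearson correlation, it also penalizes shifts in mean and scale between predictions and gold labels. A minimal sketch of Lin's standard definition (the example arrays are made up):

```python
import numpy as np

def ccc(pred, gold):
    """Lin's concordance correlation coefficient between two 1-D arrays."""
    pred = np.asarray(pred, dtype=float)
    gold = np.asarray(gold, dtype=float)
    mx, my = pred.mean(), gold.mean()
    vx, vy = pred.var(), gold.var()                # population variances
    cov = ((pred - mx) * (gold - my)).mean()       # population covariance
    return 2 * cov / (vx + vy + (mx - my) ** 2)

# perfect agreement gives 1.0; a constant offset lowers the score
print(ccc([1, 2, 3, 4], [1, 2, 3, 4]))
print(ccc([1, 2, 3, 4], [2, 3, 4, 5]))
```

The second call illustrates the mean-shift penalty: the two sequences are perfectly correlated, yet the CCC drops below 1 because of the constant offset in the denominator term `(mx - my) ** 2`.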

# | Model | Metric | Claimed | Verified | Status
1 | 4D-aNN | Accuracy | 96.1 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CNN | | 1'"1 | | Unverified