SOTAVerified

Emotion Recognition

Emotion recognition is an important research area for enabling effective human-computer interaction. Human emotions can be detected from speech signals, facial expressions, body language, and electroencephalography (EEG). Source: Using Deep Autoencoders for Facial Expression Recognition

Papers

Showing 1601–1625 of 2041 papers

Title | Status | Hype
Multimodal Emotion Recognition using Transfer Learning from Speaker Recognition and BERT-based models | | 0
Multimodal Emotion Recognition with Vision-language Prompting and Modality Dropout | | 0
Multimodal End-to-End Group Emotion Recognition using Cross-Modal Attention | | 0
Multimodal fusion via cortical network inspired losses | | 0
Multimodal Fusion with Deep Neural Networks for Audio-Video Emotion Recognition | | 0
Multimodal Group Emotion Recognition In-the-wild Using Privacy-Compliant Features | | 0
Multimodal Language Analysis in the Wild: CMU-MOSEI Dataset and Interpretable Dynamic Fusion Graph | | 0
Multimodal Latent Emotion Recognition from Micro-expression and Physiological Signals | | 0
Multimodal Local-Global Ranking Fusion for Emotion Recognition | | 0
Multimodal Mixture of Low-Rank Experts for Sentiment Analysis and Emotion Recognition | | 0
Multi-modal Mood Reader: Pre-trained Model Empowers Cross-Subject Emotion Recognition | | 0
Multimodal Prompt Transformer with Hybrid Contrastive Learning for Emotion Recognition in Conversation | | 0
Multimodal Relational Tensor Network for Sentiment and Emotion Classification | | 0
Multimodal Representation Learning Techniques for Comprehensive Facial State Analysis | | 0
Multi-modal Residual Perceptron Network for Audio-Video Emotion Recognition | | 0
Multimodal Sentiment Analysis based on Video and Audio Inputs | | 0
Multi-Modal Sequence Fusion via Recursive Attention for Emotion Recognition | | 0
Multimodal Speech Emotion Recognition using Cross Attention with Aligned Audio and Text | | 0
Multimodal Stress Detection Using Facial Landmarks and Biometric Signals | | 0
Multiple Riemannian Manifold-valued Descriptors based Image Set Classification with Multi-Kernel Metric Learning | | 0
Multiscale Contextual Learning for Speech Emotion Recognition in Emergency Call Center Conversations | | 0
Multiscale Fractal Analysis on EEG Signals for Music-Induced Emotion Recognition | | 0
Multi-Scale Temporal Transformer For Speech Emotion Recognition | | 0
Multi-Source Domain Adaptation with Transformer-based Feature Generation for Subject-Independent EEG-based Emotion Recognition | | 0
Multi-Source EEG Emotion Recognition via Dynamic Contrastive Domain Adaptation | | 0
Page 65 of 82

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | M2D-CLAP | EmoA | 77.4 | | Unverified
2 | M2D2 | EmoA | 76.7 | | Unverified
3 | M2D | EmoA | 76.1 | | Unverified
4 | Jukebox (Pre-training: CALM) | EmoA | 72.1 | | Unverified
5 | CLMR (Pre-training: contrastive) | EmoA | 67.8 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | LogisticRegression on posteriors of xlsr-Wav2Vec2.0 & bi-LSTM+Attention | Accuracy | 86.7 | | Unverified
2 | MultiMAE-DER | WAR | 83.61 | | Unverified
3 | Intermediate-Attention-Fusion | Accuracy | 81.58 | | Unverified
4 | Logistic Regression on posteriors of the CNN-14 & biLSTM-GuidedST | Accuracy | 80.08 | | Unverified
5 | ERANN-0-4 | Accuracy | 74.8 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | CAGE | Top-3 Accuracy (%) | 14.73 | | Unverified
2 | FocusCLIP | Top-3 Accuracy (%) | 13.73 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | VGG based | 5-class test accuracy | 66.13 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | MaSaC-ERC-Z | F1-score (Weighted) | 51.17 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | BiHDM | Accuracy | 40.34 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | w2v2-L-robust-12 | Concordance correlation coefficient (CCC) | 0.64 | | Unverified
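The concordance correlation coefficient (CCC) reported for w2v2-L-robust-12 measures agreement between predicted and gold continuous emotion values, penalizing both scale and location shifts: ρ_c = 2σ_xy / (σ_x² + σ_y² + (μ_x − μ_y)²). A minimal sketch of the standard formula (function name and sample values are illustrative, not taken from any benchmark here):

```python
import numpy as np

def ccc(x: np.ndarray, y: np.ndarray) -> float:
    """Lin's concordance correlation coefficient between two sequences."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()            # population variances
    cov = ((x - mx) * (y - my)).mean()   # population covariance
    return 2 * cov / (vx + vy + (mx - my) ** 2)

# Perfect agreement gives 1.0; a constant offset lowers the score,
# unlike Pearson correlation, which would remain 1.0.
a = np.array([0.1, 0.4, 0.6, 0.9])
print(ccc(a, a))            # 1.0
print(round(ccc(a, a + 0.2), 3))  # ≈ 0.81
```

Because the mean-difference term sits in the denominator, a model cannot reach a high CCC by predicting values that merely correlate with the targets; it must match their scale and mean as well.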
# | Model | Metric | Claimed | Verified | Status
1 | 4D-aNN | Accuracy | 96.1 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | CNN | | 1'"1 | | Unverified