SOTAVerified

Emotion Recognition

Emotion Recognition is an important area of research for enabling effective human-computer interaction. Human emotions can be detected from speech signals, facial expressions, body language, and electroencephalography (EEG). Source: Using Deep Autoencoders for Facial Expression Recognition

Papers

Showing 1351-1375 of 2041 papers

| Title | Status | Hype |
|---|---|---|
| Unifying Geometric Features and Facial Action Units for Improved Performance of Facial Expression Analysis | | 0 |
| Unifying the Discrete and Continuous Emotion labels for Speech Emotion Recognition | | 0 |
| UniMEEC: Towards Unified Multimodal Emotion Recognition and Emotion Cause | | 0 |
| Unimodal-driven Distillation in Multimodal Emotion Recognition with Dynamic Fusion | | 0 |
| Unlocking the Emotional World of Visual Media: An Overview of the Science, Research, and Impact of Understanding Emotion | | 0 |
| Unsupervised Counselor Dialogue Clustering for Positive Emotion Elicitation in Neural Dialogue System | | 0 |
| Unsupervised Cross-Lingual Speech Emotion Recognition Using Domain-Adversarial Neural Network | | 0 |
| Unsupervised Discovery of Facial Landmarks and Head Pose | | 0 |
| Unsupervised Learning in Reservoir Computing for EEG-based Emotion Recognition | | 0 |
| Unsupervised low-rank representations for speech emotion recognition | | 0 |
| Unsupervised Multimodal Language Representations using Convolutional Autoencoders | | 0 |
| Unsupervised Multi-Modal Representation Learning for Affective Computing with Multi-Corpus Wearable Data | | 0 |
| Unsupervised Personalization of an Emotion Recognition System: The Unique Properties of the Externalization of Valence in Speech | | 0 |
| Unsupervised Representation Learning with Future Observation Prediction for Speech Emotion Recognition | | 0 |
| Unsupervised Representations Improve Supervised Learning in Speech Emotion Recognition | | 0 |
| Unveiling Emotions from EEG: A GRU-Based Approach | | 0 |
| Usefulness of Emotional Prosody in Neural Machine Translation | | 0 |
| Use of Affective Visual Information for Summarization of Human-Centric Videos | | 0 |
| Use of Variational Inference in Music Emotion Recognition | | 0 |
| User independent Emotion Recognition with Residual Signal-Image Network | | 0 |
| User profile-driven large-scale multi-agent learning from demonstration in federated human-robot collaborative environments | | 0 |
| USI-IR at IEST 2018: Sequence Modeling and Pseudo-Relevance Feedback for Implicit Emotion Detection | | 0 |
| Using Auxiliary Tasks In Multimodal Fusion Of Wav2vec 2.0 And BERT For Multimodal Emotion Recognition | | 0 |
| Using Extracted Emotion Cause to Improve Content-Relevance for Empathetic Conversation Generation | | 0 |
| Using Hankel Matrices for Dynamics-based Facial Emotion Recognition and Pain Detection | | 0 |
Page 55 of 82

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | M2D-CLAP | EmoA | 77.4 | | Unverified |
| 2 | M2D2 | EmoA | 76.7 | | Unverified |
| 3 | M2D | EmoA | 76.1 | | Unverified |
| 4 | Jukebox (Pre-training: CALM) | EmoA | 72.1 | | Unverified |
| 5 | CLMR (Pre-training: contrastive) | EmoA | 67.8 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Logistic Regression on posteriors of xlsr-Wav2Vec2.0 & bi-LSTM+Attention | Accuracy | 86.7 | | Unverified |
| 2 | MultiMAE-DER | WAR | 83.61 | | Unverified |
| 3 | Intermediate-Attention-Fusion | Accuracy | 81.58 | | Unverified |
| 4 | Logistic Regression on posteriors of the CNN-14 & biLSTM-GuidedST | Accuracy | 80.08 | | Unverified |
| 5 | ERANN-0-4 | Accuracy | 74.8 | | Unverified |
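The WAR metric in the MultiMAE-DER row is the weighted average recall commonly reported in speech emotion recognition; weighting per-class recall by class support makes it equal to overall accuracy, whereas UAR (unweighted average recall) averages per-class recalls and is robust to class imbalance. A minimal sketch (the helper names `war` and `uar` are ours, not from any listed paper):

```python
def war(y_true, y_pred):
    # Weighted average recall: per-class recall weighted by class support,
    # which algebraically reduces to overall accuracy.
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def uar(y_true, y_pred):
    # Unweighted average recall: the mean of per-class recalls (macro recall),
    # so minority classes count as much as majority classes.
    classes = sorted(set(y_true))
    recalls = []
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        support = sum(1 for t in y_true if t == c)
        recalls.append(tp / support)
    return sum(recalls) / len(recalls)
```

On an imbalanced test set a classifier that always predicts the majority class scores high WAR but low UAR, which is why both are often reported together.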
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | CAGE | Top-3 Accuracy (%) | 14.73 | | Unverified |
| 2 | FocusCLIP | Top-3 Accuracy (%) | 13.73 | | Unverified |
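The CAGE and FocusCLIP rows report Top-3 accuracy: the fraction of samples whose true label appears among the model's three highest-scored classes. A minimal sketch (the function name `topk_accuracy` is ours):

```python
def topk_accuracy(scores, labels, k=3):
    # scores: one list of per-class scores per sample; labels: true class indices.
    # A sample counts as correct if its label is among the k top-scored classes.
    hits = 0
    for s, y in zip(scores, labels):
        topk = sorted(range(len(s)), key=lambda i: s[i], reverse=True)[:k]
        hits += y in topk
    return hits / len(labels)
```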
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | VGG based | 5-class test accuracy | 66.13 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | MaSaC-ERC-Z | F1-score (Weighted) | 51.17 | | Unverified |
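The weighted F1-score in the MaSaC-ERC-Z row computes F1 per class and averages the scores weighted by each class's support, which suits the imbalanced label distributions typical of emotion datasets. A minimal sketch (the function name `weighted_f1` is ours):

```python
def weighted_f1(y_true, y_pred):
    # Per-class F1, averaged with weights proportional to class support.
    classes = sorted(set(y_true))
    total = len(y_true)
    score = 0.0
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        score += (y_true.count(c) / total) * f1
    return score
```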
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | BiHDM | Accuracy | 40.34 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | w2v2-L-robust-12 | Concordance correlation coefficient (CCC) | 0.64 | | Unverified |
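The CCC metric in the w2v2-L-robust-12 row is standard for continuous emotion (valence/arousal) prediction: unlike Pearson correlation, it also penalizes differences in mean and scale between predictions and labels, reaching 1 only for perfect agreement. A minimal sketch of Lin's concordance correlation coefficient (the function name `ccc` is ours):

```python
def ccc(x, y):
    # Lin's CCC = 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2).
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return 2 * cov / (vx + vy + (mx - my) ** 2)
```

A constant offset between predictions and labels leaves Pearson correlation at 1 but pulls CCC below 1, which is exactly the behavior wanted for dimensional emotion regression.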
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | 4D-aNN | Accuracy | 96.1 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | CNN | | 1'"1 | | Unverified |