SOTAVerified

Emotion Recognition

Emotion recognition is an important research area for enabling effective human-computer interaction. Human emotions can be detected from speech signals, facial expressions, body language, and electroencephalography (EEG). Source: Using Deep Autoencoders for Facial Expression Recognition

Papers

Showing 1201–1225 of 2041 papers

| Title | Status | Hype |
| --- | --- | --- |
| Chat-Capsule: A Hierarchical Capsule for Dialog-level Emotion Analysis | | 0 |
| x-enVENT: A Corpus of Event Descriptions with Experiencer-specific Emotion and Appraisal Annotations | | 0 |
| EEG based Emotion Recognition: A Tutorial and Review | | 0 |
| Emotion Recognition using Machine Learning and ECG signals | | 0 |
| Audiovisual Affect Assessment and Autonomous Automobiles: Applications | | 0 |
| Topological EEG Nonlinear Dynamics Analysis for Emotion Recognition | | 0 |
| EventFormer: AU Event Transformer for Facial Action Unit Event Detection | | 0 |
| Robust Federated Learning Against Adversarial Attacks for Speech Emotion Recognition | | 0 |
| Estimating the Uncertainty in Emotion Class Labels with Utterance-Specific Dirichlet Priors | | 0 |
| Training privacy-preserving video analytics pipelines by suppressing features that reveal information about private attributes | Code | 0 |
| Attention-based Region of Interest (ROI) Detection for Speech Emotion Recognition | | 0 |
| TRILLsson: Distilled Universal Paralinguistic Speech Representations | | 0 |
| Towards a Common Speech Analysis Engine | | 0 |
| DAGAM: A Domain Adversarial Graph Attention Model for Subject Independent EEG-Based Emotion Recognition | | 0 |
| Novel techniques for improving NNetEn entropy calculation for short and noisy time series | | 0 |
| Enhancing Affective Representations of Music-Induced EEG through Multimodal Supervision and latent Domain Adaptation | Code | 0 |
| Multimodal Emotion Recognition using Transfer Learning from Speaker Recognition and BERT-based models | | 0 |
| Adults as Augmentations for Children in Facial Emotion Recognition with Contrastive Learning | | 0 |
| TamilEmo: Finegrained Emotion Detection Dataset for Tamil | | 0 |
| CALM: Contrastive Aligned Audio-Language Multirate and Multimodal Representations | | 0 |
| Speech Emotion Recognition using Self-Supervised Features | | 0 |
| LEAPMood: Light and Efficient Architecture to Predict Mood with Genetic Algorithm driven Hyperparameter Tuning | | 0 |
| Interpretability for Multimodal Emotion Recognition using Concept Activation Vectors | | 0 |
| Speaker Normalization for Self-supervised Speech Emotion Recognition | | 0 |
| Self-supervised Graphs for Audio Representation Learning with Limited Labeled Data | Code | 0 |
Page 49 of 82

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | M2D-CLAP | EmoA | 77.4 | | Unverified |
| 2 | M2D2 | EmoA | 76.7 | | Unverified |
| 3 | M2D | EmoA | 76.1 | | Unverified |
| 4 | Jukebox (Pre-training: CALM) | EmoA | 72.1 | | Unverified |
| 5 | CLMR (Pre-training: contrastive) | EmoA | 67.8 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | LogisticRegression on posteriors of xlsr-Wav2Vec2.0&bi-LSTM+Attention | Accuracy | 86.7 | | Unverified |
| 2 | MultiMAE-DER | WAR | 83.61 | | Unverified |
| 3 | Intermediate-Attention-Fusion | Accuracy | 81.58 | | Unverified |
| 4 | Logistic Regression on posteriors of the CNN-14&biLSTM-GuidedST | Accuracy | 80.08 | | Unverified |
| 5 | ERANN-0-4 | Accuracy | 74.8 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | CAGE | Top-3 Accuracy (%) | 14.73 | | Unverified |
| 2 | FocusCLIP | Top-3 Accuracy (%) | 13.73 | | Unverified |
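The Top-3 Accuracy metric in the table above counts a prediction as correct when the true class appears among the model's three highest-scoring classes. A minimal NumPy sketch of how top-k accuracy is typically computed (the function name `top_k_accuracy` is illustrative, not taken from any listed paper):

```python
import numpy as np

def top_k_accuracy(scores, labels, k=3):
    """Fraction of samples whose true label is among the k top-scoring classes.

    scores: array of shape (n_samples, n_classes) with per-class scores.
    labels: array of shape (n_samples,) with integer class labels.
    """
    scores = np.asarray(scores)
    # argsort is ascending, so the last k columns hold the k highest scores
    topk = np.argsort(scores, axis=1)[:, -k:]
    hits = (topk == np.asarray(labels)[:, None]).any(axis=1)
    return float(hits.mean())
```

With k=1 this reduces to ordinary accuracy; larger k is more forgiving, which is why Top-3 numbers exceed Top-1 numbers on the same benchmark.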
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | VGG based | 5-class test accuracy | 66.13 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | MaSaC-ERC-Z | F1-score (Weighted) | 51.17 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | BiHDM | Accuracy | 40.34 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | w2v2-L-robust-12 | Concordance correlation coefficient (CCC) | 0.64 | | Unverified |
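The concordance correlation coefficient (CCC) in the table above measures agreement between predicted and reference continuous emotion ratings, penalizing both scale and mean shifts, unlike Pearson correlation. A minimal sketch of Lin's CCC using population statistics (the helper name `ccc` is illustrative):

```python
import numpy as np

def ccc(x, y):
    """Lin's concordance correlation coefficient between two rating sequences.

    CCC = 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2)
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()            # population variances
    cov = ((x - mx) * (y - my)).mean()   # population covariance
    return 2.0 * cov / (vx + vy + (mx - my) ** 2)
```

CCC equals 1 only for perfect agreement; a prediction that is perfectly correlated but offset or rescaled scores below 1, which is why it is the standard metric for dimensional (valence/arousal) emotion prediction.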
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | 4D-aNN | Accuracy | 96.1 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | CNN | | 1'"1 | | Unverified |