SOTAVerified

Emotion Recognition

Emotion recognition is an important research area for enabling effective human-computer interaction. Human emotions can be detected from speech signals, facial expressions, body language, and electroencephalography (EEG). Source: Using Deep Autoencoders for Facial Expression Recognition

Papers

Showing 1526–1550 of 2041 papers

| Title | Status | Hype |
|---|---|---|
| A Multimodal Fusion Network For Student Emotion Recognition Based on Transformer and Tensor Product | | 0 |
| A Multi-Task Learning & Generation Framework: Valence-Arousal, Action Units & Primary Expressions | | 0 |
| A Multi-Task, Multi-Modal Approach for Predicting Categorical and Dimensional Emotions | | 0 |
| A Multi-View Sentiment Corpus | | 0 |
| AMuSE: Adaptive Multimodal Analysis for Speaker Emotion Recognition in Group Conversations | | 0 |
| An Adapter-Based Unified Model for Multiple Spoken Language Processing Tasks | | 0 |
| An adversarial learning framework for preserving users' anonymity in face-based emotion recognition | | 0 |
| An Affective Situation Labeling System from Psychological Behaviors in Emotion Recognition | | 0 |
| An Algerian Corpus and an Annotation Platform for Opinion and Emotion Analysis | | 0 |
| Analysis of Basic Emotions in Texts Based on BERT Vector Representation | | 0 |
| Analysis of constant-Q filterbank based representations for speech emotion recognition | | 0 |
| Analysis of Resource-efficient Predictive Models for Natural Language Processing | | 0 |
| Analyzing Emotions in Bangla Social Media Comments Using Machine Learning and LIME | | 0 |
| Analyzing Speech Unit Selection for Textless Speech-to-Speech Translation | | 0 |
| Analyzing the Affect of a Group of People Using Multi-modal Framework | | 0 |
| Analyzing the Influence of Dataset Composition for Emotion Recognition | | 0 |
| An analysis of large speech models-based representations for speech emotion recognition | | 0 |
| An Application of a Runtime Epistemic Probabilistic Event Calculus to Decision-making in e-Health Systems | | 0 |
| An Approach for Improving Automatic Mouth Emotion Recognition | | 0 |
| An Architecture for Accelerated Large-Scale Inference of Transformer-Based Language Models | | 0 |
| An Attribute-Aligned Strategy for Learning Speech Representation | | 0 |
| An Audio-Video Deep and Transfer Learning Framework for Multimodal Emotion Recognition in the wild | | 0 |
| An EEG-Based Multi-Modal Emotion Database with Both Posed and Authentic Facial Actions for Emotion Analysis | | 0 |
| An Efficient End-to-End Transformer with Progressive Tri-modal Attention for Multi-modal Emotion Recognition | | 0 |
| An Empirical Study and Improvement for Speech Emotion Recognition | | 0 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | M2D-CLAP | EmoA | 77.4 | | Unverified |
| 2 | M2D2 | EmoA | 76.7 | | Unverified |
| 3 | M2D | EmoA | 76.1 | | Unverified |
| 4 | Jukebox (Pre-training: CALM) | EmoA | 72.1 | | Unverified |
| 5 | CLMR (Pre-training: contrastive) | EmoA | 67.8 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | LogisticRegression on posteriors of xlsr-Wav2Vec2.0&bi-LSTM+Attention | Accuracy | 86.7 | | Unverified |
| 2 | MultiMAE-DER | WAR | 83.61 | | Unverified |
| 3 | Intermediate-Attention-Fusion | Accuracy | 81.58 | | Unverified |
| 4 | Logistic Regression on posteriors of the CNN-14&biLSTM-GuidedST | Accuracy | 80.08 | | Unverified |
| 5 | ERANN-0-4 | Accuracy | 74.8 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | CAGE | Top-3 Accuracy (%) | 14.73 | | Unverified |
| 2 | FocusCLIP | Top-3 Accuracy (%) | 13.73 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | VGG based | 5-class test accuracy | 66.13 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | MaSaC-ERC-Z | F1-score (Weighted) | 51.17 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | BiHDM | Accuracy | 40.34 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | w2v2-L-robust-12 | Concordance correlation coefficient (CCC) | 0.64 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | 4D-aNN | Accuracy | 96.1 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | CNN | 1'" | 1 | | Unverified |