SOTAVerified

Emotion Recognition

Emotion recognition is an important research area for enabling effective human-computer interaction. Human emotions can be detected from speech signals, facial expressions, body language, and electroencephalography (EEG). Source: Using Deep Autoencoders for Facial Expression Recognition

Papers

Showing 976-1000 of 2041 papers

| Title | Status | Hype |
|---|---|---|
| Dynamic Facial Expression Generation on Hilbert Hypersphere with Conditional Wasserstein Generative Adversarial Nets | | 0 |
| FSER: Deep Convolutional Neural Networks for Speech Emotion Recognition | | 0 |
| Dynamic Causal Disentanglement Model for Dialogue Emotion Detection | | 0 |
| Fuse and Adapt: Investigating the Use of Pre-Trained Self-Supervising Learning Models in Limited Data NLU problems | | 0 |
| Fusing ASR Outputs in Joint Training for Speech Emotion Recognition | | 0 |
| Fusing Audio, Textual and Visual Features for Sentiment Analysis of News Videos | | 0 |
| Fusion approaches for emotion recognition from speech using acoustic and text-based features | | 0 |
| Fusion of EEG and Musical Features in Continuous Music-emotion Recognition | | 0 |
| Fusion with Hierarchical Graphs for Mulitmodal Emotion Recognition | | 0 |
| Fuzzy Approach for Audio-Video Emotion Recognition in Computer Games for Children | | 0 |
| Fuzzy-aware Loss for Source-free Domain Adaptation in Visual Emotion Recognition | | 0 |
| BERT-ERC: Fine-tuning BERT is Enough for Emotion Recognition in Conversation | | 0 |
| Data Augmentation for Enhancing EEG-based Emotion Recognition with Deep Generative Models | | 0 |
| An Architecture for Accelerated Large-Scale Inference of Transformer-Based Language Models | | 0 |
| Group-Level Emotion Recognition Using a Unimodal Privacy-Safe Non-Individual Approach | | 0 |
| GANSER: A Self-supervised Data Augmentation Framework for EEG-based Emotion Recognition | | 0 |
| GatedxLSTM: A Multimodal Affective Computing Approach for Emotion Recognition in Conversations | | 0 |
| Technical Approach for the EMI Challenge in the 8th Affective Behavior Analysis in-the-Wild Competition | | 0 |
| An Approach for Improving Automatic Mouth Emotion Recognition | | 0 |
| Gaze-enhanced Crossmodal Embeddings for Emotion Recognition | | 0 |
| GCM-Net: Graph-enhanced Cross-Modal Infusion with a Metaheuristic-Driven Network for Video Sentiment and Emotion Analysis | | 0 |
| GEmo-CLAP: Gender-Attribute-Enhanced Contrastive Language-Audio Pretraining for Accurate Speech Emotion Recognition | | 0 |
| General Purpose Textual Sentiment Analysis and Emotion Detection Tools | | 0 |
| Dual Prototyping with Domain and Class Prototypes for Affective Brain-Computer Interface in Unseen Target Conditions | | 0 |
| Dual-GAN: Joint BVP and Noise Modeling for Remote Physiological Measurement | | 0 |
Page 40 of 82

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | M2D-CLAP | EmoA | 77.4 | | Unverified |
| 2 | M2D2 | EmoA | 76.7 | | Unverified |
| 3 | M2D | EmoA | 76.1 | | Unverified |
| 4 | Jukebox (Pre-training: CALM) | EmoA | 72.1 | | Unverified |
| 5 | CLMR (Pre-training: contrastive) | EmoA | 67.8 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | LogisticRegression on posteriors of xlsr-Wav2Vec2.0 & bi-LSTM+Attention | Accuracy | 86.7 | | Unverified |
| 2 | MultiMAE-DER | WAR | 83.61 | | Unverified |
| 3 | Intermediate-Attention-Fusion | Accuracy | 81.58 | | Unverified |
| 4 | Logistic Regression on posteriors of the CNN-14 & biLSTM-GuidedST | Accuracy | 80.08 | | Unverified |
| 5 | ERANN-0-4 | Accuracy | 74.8 | | Unverified |
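The table above mixes plain Accuracy with WAR (weighted average recall). As a rough illustration of how WAR relates to its unweighted counterpart UAR, a common companion metric on imbalanced emotion datasets, here is a minimal pure-Python sketch (the function name and labels are illustrative, not taken from any listed paper):

```python
from collections import defaultdict

def war_uar(y_true, y_pred):
    """Weighted vs. unweighted average recall.

    WAR weights each class's recall by its frequency, so it equals
    plain accuracy; UAR averages per-class recalls equally, which
    penalizes models that ignore rare emotion classes.
    """
    hits = defaultdict(int)    # correct predictions per true class
    totals = defaultdict(int)  # sample count per true class
    for t, p in zip(y_true, y_pred):
        totals[t] += 1
        hits[t] += (t == p)
    recalls = {c: hits[c] / totals[c] for c in totals}
    war = sum(hits.values()) / len(y_true)
    uar = sum(recalls.values()) / len(recalls)
    return war, uar

# Toy example: 3 "happy" samples, 1 "sad" sample
war, uar = war_uar(["hap", "hap", "hap", "sad"],
                   ["hap", "hap", "sad", "sad"])
```

With the toy labels above, WAR is 0.75 (3 of 4 correct) while UAR is about 0.83, because the perfectly recalled minority class counts equally.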
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | CAGE | Top-3 Accuracy (%) | 14.73 | | Unverified |
| 2 | FocusCLIP | Top-3 Accuracy (%) | 13.73 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | VGG based | 5-class test accuracy | 66.13 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | MaSaC-ERC-Z | F1-score (Weighted) | 51.17 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | BiHDM | Accuracy | 40.34 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | w2v2-L-robust-12 | Concordance correlation coefficient (CCC) | 0.64 | | Unverified |
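The CCC entry refers to Lin's concordance correlation coefficient, the usual metric for continuous (dimensional) emotion targets such as arousal and valence. A minimal pure-Python sketch of the standard definition (the helper name is ours, not from the listed model):

```python
from statistics import fmean

def ccc(x, y):
    """Lin's concordance correlation coefficient.

    Combines correlation with agreement in mean and scale:
    CCC = 2*cov(x, y) / (var(x) + var(y) + (mean_x - mean_y)^2),
    using population (biased) variance and covariance.
    """
    mx, my = fmean(x), fmean(y)
    vx = fmean([(a - mx) ** 2 for a in x])
    vy = fmean([(b - my) ** 2 for b in y])
    cov = fmean([(a - mx) * (b - my) for a, b in zip(x, y)])
    return 2 * cov / (vx + vy + (mx - my) ** 2)
```

CCC reaches 1 only for perfect agreement, so unlike Pearson correlation it also penalizes systematic bias and scale mismatch between predictions and labels.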
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | 4D-aNN | Accuracy | 96.1 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | CNN | | 1'"1 | | Unverified |