SOTAVerified

Emotion Recognition

Emotion recognition is an important research area for enabling effective human-computer interaction. Human emotions can be detected from speech signals, facial expressions, body language, and electroencephalography (EEG). Source: Using Deep Autoencoders for Facial Expression Recognition

Papers

Showing 951–1000 of 2041 papers

Title | Status | Hype
Dynamic Modality and View Selection for Multimodal Emotion Recognition with Missing Modalities | – | 0
Few-shot Learning in Emotion Recognition of Spontaneous Speech Using a Siamese Neural Network with Adaptive Sample Pair Formation | – | 0
Crowdsourcing a Word-Emotion Association Lexicon | – | 0
FindingEmo: An Image Dataset for Emotion Recognition in the Wild | – | 0
Finding Good Representations of Emotions for Text Classification | – | 0
Findings of the Shared Task on Emotion Analysis in Tamil | – | 0
Finding Task-specific Subnetworks in Multi-task Spoken Language Understanding Model | – | 0
Fine-Grained Emotion Detection in Health-Related Online Posts | – | 0
Dynamic Layer Customization for Noise Robust Speech Emotion Recognition in Heterogeneous Condition Training | – | 0
CSAT-FTCN: A Fuzzy-Oriented Model with Contextual Self-attention Network for Multimodal Emotion Recognition | – | 0
Fine-tuning Wav2vec for Vocal-burst Emotion Recognition | – | 0
Fitting Different Interactive Information: Joint Classification of Emotion and Intention | – | 0
A Transfer Learning Method for Speech Emotion Recognition from Automatic Speech Recognition | – | 0
Focal Loss based Residual Convolutional Neural Network for Speech Emotion Recognition | – | 0
Human Pose Descriptions and Subject-Focused Attention for Improved Zero-Shot Transfer in Human-Centric Classification Tasks | – | 0
CUET-NLP@TamilNLP-ACL2022: Multi-Class Textual Emotion Detection from Social Media using Transformer | – | 0
Forewords | – | 0
Fractal Dimension Pattern Based Multiresolution Analysis for Rough Estimator of Person-Dependent Audio Emotion Recognition | – | 0
Best Practices for Noise-Based Augmentation to Improve the Performance of Deployable Speech-Based Emotion Recognition Systems | – | 0
A Comparative Study of Western and Chinese Classical Music based on Soundscape Models | – | 0
Dynamic Graph Neural ODE Network for Multi-modal Emotion Recognition in Conversation | – | 0
Framewise approach in multimodal emotion recognition in OMG challenge | – | 0
Adaptive Fusion Techniques for Multimodal Data | – | 0
Interpretable Deep Neural Networks for Facial Expression and Dimensional Emotion Recognition in-the-wild | – | 0
Best Practices for Noise-Based Augmentation to Improve the Performance of Deployable Speech-Based Emotion Recognition Systems | – | 0
Dynamic Facial Expression Generation on Hilbert Hypersphere with Conditional Wasserstein Generative Adversarial Nets | – | 0
FSER: Deep Convolutional Neural Networks for Speech Emotion Recognition | – | 0
Dynamic Causal Disentanglement Model for Dialogue Emotion Detection | – | 0
Fuse and Adapt: Investigating the Use of Pre-Trained Self-Supervising Learning Models in Limited Data NLU problems | – | 0
Fusing ASR Outputs in Joint Training for Speech Emotion Recognition | – | 0
Fusing Audio, Textual and Visual Features for Sentiment Analysis of News Videos | – | 0
Fusion approaches for emotion recognition from speech using acoustic and text-based features | – | 0
Fusion of EEG and Musical Features in Continuous Music-emotion Recognition | – | 0
Fusion with Hierarchical Graphs for Mulitmodal Emotion Recognition | – | 0
Fuzzy Approach for Audio-Video Emotion Recognition in Computer Games for Children | – | 0
Fuzzy-aware Loss for Source-free Domain Adaptation in Visual Emotion Recognition | – | 0
BERT-ERC: Fine-tuning BERT is Enough for Emotion Recognition in Conversation | – | 0
Data Augmentation for Enhancing EEG-based Emotion Recognition with Deep Generative Models | – | 0
An Architecture for Accelerated Large-Scale Inference of Transformer-Based Language Models | – | 0
Group-Level Emotion Recognition Using a Unimodal Privacy-Safe Non-Individual Approach | – | 0
GANSER: A Self-supervised Data Augmentation Framework for EEG-based Emotion Recognition | – | 0
GatedxLSTM: A Multimodal Affective Computing Approach for Emotion Recognition in Conversations | – | 0
Technical Approach for the EMI Challenge in the 8th Affective Behavior Analysis in-the-Wild Competition | – | 0
An Approach for Improving Automatic Mouth Emotion Recognition | – | 0
Gaze-enhanced Crossmodal Embeddings for Emotion Recognition | – | 0
GCM-Net: Graph-enhanced Cross-Modal Infusion with a Metaheuristic-Driven Network for Video Sentiment and Emotion Analysis | – | 0
GEmo-CLAP: Gender-Attribute-Enhanced Contrastive Language-Audio Pretraining for Accurate Speech Emotion Recognition | – | 0
General Purpose Textual Sentiment Analysis and Emotion Detection Tools | – | 0
Dual Prototyping with Domain and Class Prototypes for Affective Brain-Computer Interface in Unseen Target Conditions | – | 0
Dual-GAN: Joint BVP and Noise Modeling for Remote Physiological Measurement | – | 0
Page 20 of 41

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | M2D-CLAP | EmoA | 77.4 | – | Unverified
2 | M2D2 | EmoA | 76.7 | – | Unverified
3 | M2D | EmoA | 76.1 | – | Unverified
4 | Jukebox (Pre-training: CALM) | EmoA | 72.1 | – | Unverified
5 | CLMR (Pre-training: contrastive) | EmoA | 67.8 | – | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | LogisticRegression on posteriors of xlsr-Wav2Vec2.0&bi-LSTM+Attention | Accuracy | 86.7 | – | Unverified
2 | MultiMAE-DER | WAR | 83.61 | – | Unverified
3 | Intermediate-Attention-Fusion | Accuracy | 81.58 | – | Unverified
4 | Logistic Regression on posteriors of the CNN-14&biLSTM-GuidedST | Accuracy | 80.08 | – | Unverified
5 | ERANN-0-4 | Accuracy | 74.8 | – | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | CAGE | Top-3 Accuracy (%) | 14.73 | – | Unverified
2 | FocusCLIP | Top-3 Accuracy (%) | 13.73 | – | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | VGG based | 5-class test accuracy | 66.13 | – | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | MaSaC-ERC-Z | F1-score (Weighted) | 51.17 | – | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | BiHDM | Accuracy | 40.34 | – | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | w2v2-L-robust-12 | Concordance correlation coefficient (CCC) | 0.64 | – | Unverified
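The CCC reported in the row above is the standard agreement metric for continuous (dimensional) emotion labels, defined as ρ_c = 2·s_xy / (s_x² + s_y² + (x̄ − ȳ)²). As a reference, a minimal NumPy sketch of the metric (the function name `ccc` is ours, not the site's):

```python
import numpy as np

def ccc(preds, labels):
    """Concordance correlation coefficient between predictions and labels."""
    x = np.asarray(preds, dtype=float)
    y = np.asarray(labels, dtype=float)
    mx, my = x.mean(), y.mean()
    # Population (biased) variances and covariance, per Lin's definition.
    sx, sy = x.var(), y.var()
    sxy = ((x - mx) * (y - my)).mean()
    return 2.0 * sxy / (sx + sy + (mx - my) ** 2)
```

Unlike Pearson correlation, CCC also penalizes scale and mean offsets between predictions and labels, so perfect agreement (identical sequences) gives 1.0 while a merely correlated but shifted prediction scores lower.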
# | Model | Metric | Claimed | Verified | Status
1 | 4D-aNN | Accuracy | 96.1 | – | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | CNN | – | 1'"1 | – | Unverified