SOTAVerified

Emotion Recognition

Emotion recognition is an important research area for enabling effective human–computer interaction. Human emotions can be detected from speech signals, facial expressions, body language, and electroencephalography (EEG). Source: Using Deep Autoencoders for Facial Expression Recognition
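As a toy illustration of the classification step shared by many of the systems listed below (one benchmark entry, for instance, fits a logistic regression on model posteriors), here is a minimal nearest-centroid sketch on synthetic acoustic features. All feature values and class means here are hypothetical, chosen only to make the example self-contained:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each "utterance" is summarized by two acoustic
# statistics (mean pitch in Hz, normalized energy). Real systems use far
# richer features such as MFCCs or wav2vec 2.0 embeddings.
happy = rng.normal(loc=[220.0, 0.8], scale=0.1, size=(50, 2))  # higher pitch/energy
sad = rng.normal(loc=[180.0, 0.3], scale=0.1, size=(50, 2))    # lower pitch/energy

X = np.vstack([happy, sad])
y = np.array([0] * 50 + [1] * 50)  # 0 = happy, 1 = sad

# Nearest-centroid classifier: assign each sample to the closest class mean.
centroids = np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

def predict(samples):
    # Euclidean distance from every sample to every centroid.
    d = np.linalg.norm(samples[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

acc = (predict(X) == y).mean()
print(f"training accuracy: {acc:.2f}")  # prints: training accuracy: 1.00
```

The synthetic classes are deliberately well separated; on real emotional speech the feature distributions overlap heavily, which is why the benchmark accuracies below sit well under 100%.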

Papers

Showing 1001–1050 of 2041 papers

Title | Status | Hype
Multi-modal Residual Perceptron Network for Audio-Video Emotion Recognition | - | 0
Multimodal Sentiment Analysis based on Video and Audio Inputs | - | 0
Multi-Modal Sequence Fusion via Recursive Attention for Emotion Recognition | - | 0
Multimodal Speech Emotion Recognition using Cross Attention with Aligned Audio and Text | - | 0
Multimodal Stress Detection Using Facial Landmarks and Biometric Signals | - | 0
Multiple Riemannian Manifold-valued Descriptors based Image Set Classification with Multi-Kernel Metric Learning | - | 0
Multiscale Contextual Learning for Speech Emotion Recognition in Emergency Call Center Conversations | - | 0
Multiscale Fractal Analysis on EEG Signals for Music-Induced Emotion Recognition | - | 0
Multi-Scale Temporal Transformer For Speech Emotion Recognition | - | 0
Multi-Source Domain Adaptation with Transformer-based Feature Generation for Subject-Independent EEG-based Emotion Recognition | - | 0
Multi-Source EEG Emotion Recognition via Dynamic Contrastive Domain Adaptation | - | 0
Multistage linguistic conditioning of convolutional layers for speech emotion recognition | - | 0
Multi-stream Attention-based BLSTM with Feature Segmentation for Speech Emotion Recognition | - | 0
Affective Behavior Analysis using Action Unit Relation Graph and Multi-task Cross Attention | - | 0
Multitask Emotion Recognition Model with Knowledge Distillation and Task Discriminator | - | 0
Multi-Task Learning and Adapted Knowledge Models for Emotion-Cause Extraction | - | 0
Multi-Task Learning for Affect Analysis | - | 0
Multi-task Learning for Multi-modal Emotion Recognition and Sentiment Analysis | - | 0
Multi-Task Learning with Sentiment, Emotion, and Target Detection to Recognize Hate Speech and Offensive Language | - | 0
Multi-task, multi-label and multi-domain learning with residual convolutional networks for emotion recognition | - | 0
Multi-Task Self-Supervised Pre-Training for Music Classification | - | 0
Multitask vocal burst modeling with ResNets and pre-trained paralinguistic Conformers | - | 0
Multi-view Laplacian Eigenmaps Based on Bag-of-Neighbors For RGBD Human Emotion Recognition | - | 0
Multi-View Multi-Task Modeling with Speech Foundation Models for Speech Forensic Tasks | - | 0
Multi-Window Data Augmentation Approach for Speech Emotion Recognition | - | 0
MuSE-ing on the Impact of Utterance Ordering On Crowdsourced Emotion Annotations | - | 0
MUSER: MUltimodal Stress Detection using Emotion Recognition as an Auxiliary Task | - | 0
Musical Prosody-Driven Emotion Classification: Interpreting Vocalists' Portrayal of Emotions Through Machine Learning | - | 0
Music Interpretation and Emotion Perception: A Computational and Neurophysiological Investigation | - | 0
Music Recommendation Based on Facial Emotion Recognition | - | 0
Mutux at SemEval-2018 Task 1: Exploring Impacts of Context Information On Emotion Detection | - | 0
MVGT: A Multi-view Graph Transformer Based on Spatial Relations for EEG Emotion Recognition | - | 0
MVP: Multimodal Emotion Recognition based on Video and Physiological Signals | - | 0
My Words Imply Your Opinion: Reader Agent-Based Propagation Enhancement for Personalized Implicit Emotion Analysis | - | 0
Naturalistic Audio-Visual Emotion Database | - | 0
Natural Language Processing for Cognitive Analysis of Emotions | - | 0
Neural Architecture Search for Speech Emotion Recognition | - | 0
Neural Dependency Coding inspired Multimodal Fusion | - | 0
Neural Network architectures to classify emotions in Indian Classical Music | - | 0
Neuromorphic Valence and Arousal Estimation | - | 0
New Approach for an Affective Computing-Driven Quality of Experience (QoE) Prediction | - | 0
NLP meets psychotherapy: Using predicted client emotions and self-reported client emotions to measure emotional coherence | - | 0
Noise-Resistant Multimodal Transformer for Emotion Recognition | - | 0
Noise robust speech emotion recognition with signal-to-noise ratio adapting speech enhancement | - | 0
Non-Contrastive Self-supervised Learning for Utterance-Level Information Extraction from Speech | - | 0
Non-linear frequency warping using constant-Q transformation for speech emotion recognition | - | 0
Non-Volume Preserving-based Fusion to Group-Level Emotion Recognition on Crowd Videos | - | 0
Normalization Before Shaking Toward Learning Symmetrically Distributed Representation Without Margin in Speech Emotion Recognition | - | 0
Novel Dual-Channel Long Short-Term Memory Compressed Capsule Networks for Emotion Recognition | - | 0
Novel techniques for improving NNetEn entropy calculation for short and noisy time series | - | 0
Page 21 of 41

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | M2D-CLAP | EmoA | 77.4 | - | Unverified
2 | M2D2 | EmoA | 76.7 | - | Unverified
3 | M2D | EmoA | 76.1 | - | Unverified
4 | Jukebox (Pre-training: CALM) | EmoA | 72.1 | - | Unverified
5 | CLMR (Pre-training: contrastive) | EmoA | 67.8 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LogisticRegression on posteriors of xlsr-Wav2Vec2.0 & bi-LSTM+Attention | Accuracy | 86.7 | - | Unverified
2 | MultiMAE-DER | WAR | 83.61 | - | Unverified
3 | Intermediate-Attention-Fusion | Accuracy | 81.58 | - | Unverified
4 | Logistic Regression on posteriors of the CNN-14 & biLSTM-GuidedST | Accuracy | 80.08 | - | Unverified
5 | ERANN-0-4 | Accuracy | 74.8 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CAGE | Top-3 Accuracy (%) | 14.73 | - | Unverified
2 | FocusCLIP | Top-3 Accuracy (%) | 13.73 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | VGG based | 5-class test accuracy | 66.13 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | MaSaC-ERC-Z | F1-score (Weighted) | 51.17 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | BiHDM | Accuracy | 40.34 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | w2v2-L-robust-12 | Concordance correlation coefficient (CCC) | 0.64 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 4D-aNN | Accuracy | 96.1 | - | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | CNN | - | 1'"1 | - | Unverified