SOTAVerified

Emotion Recognition

Emotion recognition is an important research area for enabling effective human-computer interaction. Human emotions can be detected from speech signals, facial expressions, body language, and electroencephalography (EEG). Source: Using Deep Autoencoders for Facial Expression Recognition

Papers

Showing 1051-1100 of 2041 papers

Title | Status | Hype
Speech Emotion Recognition with Global-Aware Fusion on Multi-scale Feature Representation | Code | 1
Physically Disentangled Representations | Code | 0
Transformer-Based Self-Supervised Learning for Emotion Recognition | | 0
Engagement Detection with Multi-Task Training in E-Learning Environments | Code | 1
Emotional Speech Recognition with Pre-trained Deep Visual Models | Code | 0
Learning Speech Emotion Representations in the Quaternion Domain | Code | 0
Probing Speech Emotion Recognition Transformers for Linguistic Knowledge | | 0
MMER: Multimodal Multi-task Learning for Speech Emotion Recognition | Code | 1
M-MELD: A Multilingual Multi-Party Dataset for Emotion Recognition in Conversations | Code | 0
Neural Architecture Search for Speech Emotion Recognition | | 0
CTA-RNN: Channel and Temporal-wise Attention RNN Leveraging Pre-trained ASR Embeddings for Speech Emotion Recognition | | 0
Speech Emotion Recognition with Co-Attention based Multi-level Acoustic Information | Code | 1
An EEG-Based Multi-Modal Emotion Database with Both Posed and Authentic Facial Actions for Emotion Analysis | | 0
Towards Transferable Speech Emotion Representation: On loss functions for cross-lingual latent representations | | 0
A Joint Cross-Attention Model for Audio-Visual Fusion in Dimensional Emotion Recognition | Code | 1
Continuous Metric Learning For Transferable Speech Emotion Recognition and Embedding Across Low-resource Languages | | 0
A Dataset for Speech Emotion Recognition in Greek Theatrical Plays | Code | 0
A Speech Representation Anonymization Framework via Selective Noise Perturbation | Code | 0
EmoCaps: Emotion Capsule based Model for Conversational Emotion Recognition | | 0
MDAN: Multi-level Dependent Attention Network for Visual Emotion Analysis | | 0
Frame-level Prediction of Facial Expressions, Valence, Arousal and Action Units for Mobile Devices | Code | 2
EmoCaps: Emotion Capsule based Model for Conversational Emotion Recognition | | 0
EmotionNAS: Two-stream Neural Architecture Search for Speech Emotion Recognition | | 0
Continuous-Time Audiovisual Fusion with Recurrence vs. Attention for In-The-Wild Affect Recognition | | 0
Continuous Emotion Recognition using Visual-audio-linguistic information: A Technical Report for ABAW3 | Code | 1
Multitask Emotion Recognition Model with Knowledge Distillation and Task Discriminator | | 0
Chat-Capsule: A Hierarchical Capsule for Dialog-level Emotion Analysis | | 0
x-enVENT: A Corpus of Event Descriptions with Experiencer-specific Emotion and Appraisal Annotations | | 0
EEG based Emotion Recognition: A Tutorial and Review | | 0
Emotion Recognition using Machine Learning and ECG signals | | 0
Semi-FedSER: Semi-supervised Learning for Speech Emotion Recognition On Federated Learning using Multiview Pseudo-Labeling | Code | 1
Topological EEG Nonlinear Dynamics Analysis for Emotion Recognition | | 0
Audiovisual Affect Assessment and Autonomous Automobiles: Applications | | 0
Dawn of the transformer era in speech emotion recognition: closing the valence gap | Code | 2
EventFormer: AU Event Transformer for Facial Action Unit Event Detection | | 0
Robust Federated Learning Against Adversarial Attacks for Speech Emotion Recognition | | 0
Estimating the Uncertainty in Emotion Class Labels with Utterance-Specific Dirichlet Priors | | 0
Training privacy-preserving video analytics pipelines by suppressing features that reveal information about private attributes | Code | 0
MM-DFN: Multimodal Dynamic Fusion Network for Emotion Recognition in Conversations | Code | 1
Attention-based Region of Interest (ROI) Detection for Speech Emotion Recognition | | 0
TRILLsson: Distilled Universal Paralinguistic Speech Representations | | 0
Towards a Common Speech Analysis Engine | | 0
DAGAM: A Domain Adversarial Graph Attention Model for Subject Independent EEG-Based Emotion Recognition | | 0
Novel techniques for improving NNetEn entropy calculation for short and noisy time series | | 0
Automated Parkinson's Disease Detection and Affective Analysis from Emotional EEG Signals | Code | 1
Enhancing Affective Representations of Music-Induced EEG through Multimodal Supervision and latent Domain Adaptation | Code | 0
Predicting emotion from music videos: exploring the relative contribution of visual and auditory information to affective responses | Code | 1
Is Cross-Attention Preferable to Self-Attention for Multi-Modal Emotion Recognition? | Code | 1
Multimodal Emotion Recognition using Transfer Learning from Speaker Recognition and BERT-based models | | 0
PARSE: Pairwise Alignment of Representations in Semi-Supervised EEG Learning for Emotion Recognition | Code | 1
Page 22 of 41

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | M2D-CLAP | EmoA | 77.4 | | Unverified
2 | M2D2 | EmoA | 76.7 | | Unverified
3 | M2D | EmoA | 76.1 | | Unverified
4 | Jukebox (Pre-training: CALM) | EmoA | 72.1 | | Unverified
5 | CLMR (Pre-training: contrastive) | EmoA | 67.8 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | LogisticRegression on posteriors of xlsr-Wav2Vec2.0 & bi-LSTM+Attention | Accuracy | 86.7 | | Unverified
2 | MultiMAE-DER | WAR | 83.61 | | Unverified
3 | Intermediate-Attention-Fusion | Accuracy | 81.58 | | Unverified
4 | Logistic Regression on posteriors of the CNN-14 & biLSTM-GuidedST | Accuracy | 80.08 | | Unverified
5 | ERANN-0-4 | Accuracy | 74.8 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | CAGE | Top-3 Accuracy (%) | 14.73 | | Unverified
2 | FocusCLIP | Top-3 Accuracy (%) | 13.73 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | VGG based | 5-class test accuracy | 66.13 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | MaSaC-ERC-Z | F1-score (Weighted) | 51.17 | | Unverified
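For readers unfamiliar with the weighted F1 metric reported above, a minimal plain-Python sketch is shown below. This is an illustrative implementation of the standard definition (per-class F1 averaged with weights equal to each class's support in the reference labels), not the benchmark's own evaluation code.

```python
def weighted_f1(y_true, y_pred):
    """Weighted-average F1: per-class F1 scores weighted by class support."""
    n = len(y_true)
    total = 0.0
    for c in set(y_true):
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        support = sum(1 for t in y_true if t == c)  # class frequency in references
        total += f1 * support / n
    return total

# Higher is better; 1.0 means every prediction matches its reference label.
score = weighted_f1(["joy", "joy", "anger"], ["joy", "anger", "anger"])
```

Library implementations such as scikit-learn's `f1_score(..., average="weighted")` follow the same definition.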
# | Model | Metric | Claimed | Verified | Status
1 | BiHDM | Accuracy | 40.34 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | w2v2-L-robust-12 | Concordance correlation coefficient (CCC) | 0.64 | | Unverified
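The concordance correlation coefficient (CCC) used above is the standard metric for dimensional (valence/arousal) emotion prediction. As a reference, Lin's CCC can be sketched in plain Python; this is an illustrative implementation of the textbook formula, not the leaderboard's scoring script.

```python
def ccc(x, y):
    """Lin's concordance correlation coefficient:
    CCC = 2*cov(x, y) / (var(x) + var(y) + (mean_x - mean_y)^2)
    Penalizes both low correlation and systematic offset/scale differences.
    """
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return 2 * cov / (vx + vy + (mx - my) ** 2)

# Perfect agreement yields 1.0; a perfectly correlated but rescaled
# prediction scores below 1.0, unlike Pearson correlation.
print(ccc([0.1, 0.5, 0.9], [0.1, 0.5, 0.9]))  # -> 1.0
```

This offset/scale sensitivity is why CCC, rather than Pearson correlation, is reported for continuous valence and arousal prediction.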
# | Model | Metric | Claimed | Verified | Status
1 | 4D-aNN | Accuracy | 96.1 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | CNN | 1'" | 1 | | Unverified