SOTAVerified

Emotion Recognition

Emotion recognition is an important research area for enabling effective human-computer interaction. Human emotions can be detected from speech signals, facial expressions, body language, and electroencephalography (EEG). Source: Using Deep Autoencoders for Facial Expression Recognition

Papers

Showing 151–200 of 2,041 papers

| Title | Status | Hype |
|---|---|---|
| Partial Label Learning for Emotion Recognition from EEG | Code | 1 |
| ChatGPT: Jack of all trades, master of none | Code | 1 |
| Cluster-Level Contrastive Learning for Emotion Recognition in Conversations | Code | 1 |
| EmoGator: A New Open Source Vocal Burst Dataset with Baseline Machine Learning Classification Methodologies | Code | 1 |
| Multivariate, Multi-Frequency and Multimodal: Rethinking Graph Neural Networks for Emotion Recognition in Conversation | Code | 1 |
| Large Raw Emotional Dataset with Aggregation Mechanism | Code | 1 |
| A Persian ASR-based SER: Modification of Sharif Emotional Speech Database and Investigation of Persian Text Corpora | Code | 1 |
| YM2413-MDB: A Multi-Instrumental FM Video Game Music Dataset with Emotion Annotations | Code | 1 |
| Distribution-based Emotion Recognition in Conversation | Code | 1 |
| SPEAKER VGG CCT: Cross-corpus Speech Emotion Recognition with Speaker Embedding and Vision Transformers | Code | 1 |
| Using Emotion Embeddings to Transfer Knowledge Between Emotions, Languages, and Annotation Formats | Code | 1 |
| Multimodal Information Bottleneck: Learning Minimal Sufficient Unimodal and Multimodal Representations | Code | 1 |
| GM-TCNet: Gated Multi-scale Temporal Convolutional Network using Emotion Causality for Speech Emotion Recognition | Code | 1 |
| Leveraging Label Correlations in a Multi-label Setting: A Case Study in Emotion | Code | 1 |
| Exploiting modality-invariant feature for robust multimodal emotion recognition with missing modalities | Code | 1 |
| Empathetic Dialogue Generation via Sensitive Emotion Recognition and Sensible Knowledge Selection | Code | 1 |
| Supervised Prototypical Contrastive Learning for Emotion Recognition in Conversation | Code | 1 |
| In Search of a Robust Facial Expressions Recognition Model: A Large-Scale Visual Cross-Corpus Study | Code | 1 |
| MuCDN: Mutual Conversational Detachment Network for Emotion Recognition in Multi-Party Conversations | Code | 1 |
| Cross Task Neural Architecture Search for EEG Signal Classifications | Code | 1 |
| FV2ES: A Fully End2End Multimodal System for Fast Yet Effective Video Emotion Recognition Inference | Code | 1 |
| Audio-Visual Fusion for Emotion Recognition in the Valence-Arousal Space Using Joint Cross-Attention | Code | 1 |
| Non-Contrastive Self-Supervised Learning of Utterance-Level Speech Representations | Code | 1 |
| Contextual Information and Commonsense Based Prompt for Emotion Recognition in Conversation | Code | 1 |
| GA2MIF: Graph and Attention Based Two-Stage Multi-Source Information Fusion for Conversational Emotion Detection | Code | 1 |
| Affective Behaviour Analysis Using Pretrained Model with Facial Priori | Code | 1 |
| Multimodal Emotion Recognition with Modality-Pairwise Unsupervised Contrastive Loss | Code | 1 |
| Self-supervised Group Meiosis Contrastive Learning for EEG-Based Emotion Recognition | Code | 1 |
| GraphCFC: A Directed Graph Based Cross-Modal Feature Complementation Approach for Multimodal Conversational Emotion Recognition | Code | 1 |
| CoMPM: Context Modeling with Speaker’s Pre-trained Memory Tracking for Emotion Recognition in Conversation | Code | 1 |
| The MuSe 2022 Multimodal Sentiment Analysis Challenge: Humor, Emotional Reactions, and Stress | Code | 1 |
| The Emotion is Not One-hot Encoding: Learning with Grayscale Label for Emotion Recognition in Conversation | Code | 1 |
| A Multimodal Corpus for Emotion Recognition in Sarcasm | Code | 1 |
| A Japanese Dataset for Subjective and Objective Sentiment Polarity Classification in Micro Blog Domain | Code | 1 |
| M3ED: Multi-modal Multi-scene Multi-label Emotional Dialogue Database | Code | 1 |
| EmotionFlow: Capture the Dialogue Level Emotion Transitions | Code | 1 |
| COGMEN: COntextualized GNN based Multimodal Emotion recognitioN | Code | 1 |
| Speech Emotion Recognition with Global-Aware Fusion on Multi-scale Feature Representation | Code | 1 |
| GMSS: Graph-Based Multi-Task Self-Supervised Learning for EEG Emotion Recognition | Code | 1 |
| Engagement Detection with Multi-Task Training in E-Learning Environments | Code | 1 |
| MMER: Multimodal Multi-task Learning for Speech Emotion Recognition | Code | 1 |
| Speech Emotion Recognition with Co-Attention based Multi-level Acoustic Information | Code | 1 |
| A Joint Cross-Attention Model for Audio-Visual Fusion in Dimensional Emotion Recognition | Code | 1 |
| Continuous Emotion Recognition using Visual-audio-linguistic information: A Technical Report for ABAW3 | Code | 1 |
| Semi-FedSER: Semi-supervised Learning for Speech Emotion Recognition On Federated Learning using Multiview Pseudo-Labeling | Code | 1 |
| MM-DFN: Multimodal Dynamic Fusion Network for Emotion Recognition in Conversations | Code | 1 |
| Automated Parkinson's Disease Detection and Affective Analysis from Emotional EEG Signals | Code | 1 |
| Predicting emotion from music videos: exploring the relative contribution of visual and auditory information to affective responses | Code | 1 |
| Is Cross-Attention Preferable to Self-Attention for Multi-Modal Emotion Recognition? | Code | 1 |
| PARSE: Pairwise Alignment of Representations in Semi-Supervised EEG Learning for Emotion Recognition | Code | 1 |
Page 4 of 41

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | M2D-CLAP | EmoA | 77.4 | | Unverified |
| 2 | M2D2 | EmoA | 76.7 | | Unverified |
| 3 | M2D | EmoA | 76.1 | | Unverified |
| 4 | Jukebox (Pre-training: CALM) | EmoA | 72.1 | | Unverified |
| 5 | CLMR (Pre-training: contrastive) | EmoA | 67.8 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | LogisticRegression on posteriors of xlsr-Wav2Vec2.0&bi-LSTM+Attention | Accuracy | 86.7 | | Unverified |
| 2 | MultiMAE-DER | WAR | 83.61 | | Unverified |
| 3 | Intermediate-Attention-Fusion | Accuracy | 81.58 | | Unverified |
| 4 | Logistic Regression on posteriors of the CNN-14&biLSTM-GuidedST | Accuracy | 80.08 | | Unverified |
| 5 | ERANN-0-4 | Accuracy | 74.8 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | CAGE | Top-3 Accuracy (%) | 14.73 | | Unverified |
| 2 | FocusCLIP | Top-3 Accuracy (%) | 13.73 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | VGG based | 5-class test accuracy | 66.13 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | MaSaC-ERC-Z | F1-score (Weighted) | 51.17 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | BiHDM | Accuracy | 40.34 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | w2v2-L-robust-12 | Concordance correlation coefficient (CCC) | 0.64 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | 4D-aNN | Accuracy | 96.1 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | CNN | 1'" | 1 | | Unverified |