SOTAVerified

Emotion Recognition

Emotion Recognition is an important area of research for enabling effective human-computer interaction. Human emotions can be detected from speech signals, facial expressions, body language, and electroencephalography (EEG). Source: Using Deep Autoencoders for Facial Expression Recognition

Papers

Showing 251–275 of 2041 papers

Title | Status | Hype
Sentimental LIAR: Extended Corpus and Deep Learning Models for Fake Claim Classification | Code | 1
Attribute Inference Attack of Speech Emotion Recognition in Federated Learning Settings | Code | 1
Multitask Emotion Recognition with Incomplete Labels | Code | 1
SER Evals: In-domain and Out-of-domain Benchmarking for Speech Emotion Recognition | Code | 1
BHAAV- A Text Corpus for Emotion Analysis from Hindi Stories | Code | 1
BiosERC: Integrating Biography Speakers Supported by LLMs for ERC Tasks | Code | 1
FV2ES: A Fully End2End Multimodal System for Fast Yet Effective Video Emotion Recognition Inference | Code | 1
A vector quantized masked autoencoder for speech emotion recognition | Code | 1
CoMPM: Context Modeling with Speaker’s Pre-trained Memory Tracking for Emotion Recognition in Conversation | Code | 1
Automated Parkinson's Disease Detection and Affective Analysis from Emotional EEG Signals | Code | 1
Tracing Intricate Cues in Dialogue: Joint Graph Structure and Sentiment Dynamics for Multimodal Emotion Recognition | Code | 1
CFN-ESA: A Cross-Modal Fusion Network with Emotion-Shift Awareness for Dialogue Emotion Recognition | Code | 1
Density Adaptive Attention is All You Need: Robust Parameter-Efficient Fine-Tuning Across Multiple Modalities | Code | 1
GiMeFive: Towards Interpretable Facial Emotion Classification | Code | 1
Few-Shot Emotion Recognition in Conversation with Sequential Prototypical Networks | Code | 1
GMSS: Graph-Based Multi-Task Self-Supervised Learning for EEG Emotion Recognition | Code | 1
CARAT: Contrastive Feature Reconstruction and Aggregation for Multi-Modal Multi-Label Emotion Recognition | Code | 1
GPT as Psychologist? Preliminary Evaluations for GPT-4V on Visual Affective Computing | Code | 1
Continuous Emotion Recognition with Audio-visual Leader-follower Attentive Fusion | Code | 1
CAGE: Circumplex Affect Guided Expression Inference | Code | 1
Cluster-Level Contrastive Learning for Emotion Recognition in Conversations | Code | 1
Audio-Visual Fusion for Emotion Recognition in the Valence-Arousal Space Using Joint Cross-Attention | Code | 1
ChatGPT: Jack of all trades, master of none | Code | 1
CLARA: Multilingual Contrastive Learning for Audio Representation Acquisition | Code | 1
GA2MIF: Graph and Attention Based Two-Stage Multi-Source Information Fusion for Conversational Emotion Detection | Code | 1
Page 11 of 82

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | M2D-CLAP | EmoA | 77.4 | — | Unverified
2 | M2D2 | EmoA | 76.7 | — | Unverified
3 | M2D | EmoA | 76.1 | — | Unverified
4 | Jukebox (Pre-training: CALM) | EmoA | 72.1 | — | Unverified
5 | CLMR (Pre-training: contrastive) | EmoA | 67.8 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | LogisticRegression on posteriors of xlsr-Wav2Vec2.0 & bi-LSTM+Attention | Accuracy | 86.7 | — | Unverified
2 | MultiMAE-DER | WAR | 83.61 | — | Unverified
3 | Intermediate-Attention-Fusion | Accuracy | 81.58 | — | Unverified
4 | Logistic Regression on posteriors of the CNN-14 & biLSTM-GuidedST | Accuracy | 80.08 | — | Unverified
5 | ERANN-0-4 | Accuracy | 74.8 | — | Unverified
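The WAR metric in the table above (weighted average recall, equivalent to overall accuracy) is commonly reported in speech emotion recognition alongside UAR (unweighted average recall, the mean of per-class recalls, which is robust to class imbalance). A minimal sketch of both, assuming simple label lists; the helper name `recall_scores` is illustrative:

```python
def recall_scores(y_true, y_pred):
    """Compute WAR (overall accuracy) and UAR (mean per-class recall)."""
    classes = sorted(set(y_true))
    recalls = []
    for c in classes:
        # recall for class c: correctly predicted c / all true c
        idx = [i for i, t in enumerate(y_true) if t == c]
        recalls.append(sum(y_pred[i] == c for i in idx) / len(idx))
    war = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    uar = sum(recalls) / len(recalls)
    return war, uar
```

On an imbalanced test set the two can diverge sharply, which is why leaderboards often state which one a "Accuracy"-style number refers to.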
# | Model | Metric | Claimed | Verified | Status
1 | CAGE | Top-3 Accuracy (%) | 14.73 | — | Unverified
2 | FocusCLIP | Top-3 Accuracy (%) | 13.73 | — | Unverified
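Top-3 accuracy, used in the table above, counts a prediction as correct if the true label appears among the model's three highest-scored classes. A minimal sketch, assuming a per-sample score matrix; the helper name `top_k_accuracy` is illustrative:

```python
import numpy as np

def top_k_accuracy(scores, labels, k=3):
    """Fraction of samples whose true label is among the k highest-scored classes."""
    scores = np.asarray(scores)
    topk = np.argsort(scores, axis=1)[:, -k:]  # indices of the k largest scores per row
    return float(np.mean([label in row for label, row in zip(labels, topk)]))
```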
# | Model | Metric | Claimed | Verified | Status
1 | VGG based | 5-class test accuracy | 66.13 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | MaSaC-ERC-Z | F1-score (Weighted) | 51.17 | — | Unverified
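The weighted F1-score in the table above averages per-class F1 scores, weighting each class by its support (number of true instances), which matters for imbalanced emotion datasets. A minimal sketch from scratch; the helper name `weighted_f1` is illustrative:

```python
from collections import Counter

def weighted_f1(y_true, y_pred):
    """Per-class F1, averaged with weights proportional to class support in y_true."""
    labels = set(y_true) | set(y_pred)
    support = Counter(y_true)
    total = len(y_true)
    score = 0.0
    for c in labels:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        score += (support[c] / total) * f1  # classes absent from y_true get weight 0
    return score
```

This matches the behavior of scikit-learn's `f1_score(..., average='weighted')` on label lists.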
# | Model | Metric | Claimed | Verified | Status
1 | BiHDM | Accuracy | 40.34 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | w2v2-L-robust-12 | Concordance correlation coefficient (CCC) | 0.64 | — | Unverified
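The CCC in the table above is the standard metric for continuous (e.g. valence/arousal) emotion prediction: it combines Pearson correlation with a penalty for differences in mean and scale, reaching 1 only for perfect agreement. A minimal sketch of Lin's definition; the helper name `ccc` is illustrative:

```python
import numpy as np

def ccc(y_true, y_pred):
    """Concordance correlation coefficient: 2*cov / (var_t + var_p + (mu_t - mu_p)^2)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    mu_t, mu_p = y_true.mean(), y_pred.mean()
    cov = ((y_true - mu_t) * (y_pred - mu_p)).mean()  # population covariance
    return 2 * cov / (y_true.var() + y_pred.var() + (mu_t - mu_p) ** 2)
```

Unlike plain Pearson correlation, a constant offset or rescaling of the predictions lowers CCC, so it rewards calibrated as well as correlated outputs.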
# | Model | Metric | Claimed | Verified | Status
1 | 4D-aNN | Accuracy | 96.1 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | CNN | — | 1'"1 | — | Unverified