SOTAVerified

Emotion Recognition

Emotion recognition is an important research area for enabling effective human-computer interaction. Human emotions can be detected from speech signals, facial expressions, body language, and electroencephalography (EEG). Source: Using Deep Autoencoders for Facial Expression Recognition

Papers

Showing 201–225 of 2041 papers

| Title | Status | Hype |
| --- | --- | --- |
| HTNet for micro-expression recognition | Code | 1 |
| Arabic Speech Emotion Recognition Employing Wav2vec2.0 and HuBERT Based on BAVED Dataset | Code | 1 |
| iMiGUE: An Identity-free Video Dataset for Micro-Gesture Understanding and Emotion Analysis | Code | 1 |
| Enhancing Speech Emotion Recognition Through Differentiable Architecture Search | Code | 1 |
| InstructERC: Reforming Emotion Recognition in Conversation with Multi-task Retrieval-Augmented Large Language Models | Code | 1 |
| Multimodal Routing: Improving Local and Global Interpretability of Multimodal Language Analysis | Code | 1 |
| A cross-modal fusion network based on self-attention and residual structure for multimodal emotion recognition | Code | 1 |
| CMCRD: Cross-Modal Contrastive Representation Distillation for Emotion Recognition | Code | 1 |
| A Supervised Information Enhanced Multi-Granularity Contrastive Learning Framework for EEG Based Emotion Recognition | Code | 1 |
| K-EmoCon, a multimodal sensor dataset for continuous emotion recognition in naturalistic conversations | Code | 1 |
| Codified audio language modeling learns useful representations for music information retrieval | Code | 1 |
| Latent Distribution Decoupling: A Probabilistic Framework for Uncertainty-Aware Multimodal Emotion Recognition | Code | 1 |
| A Joint Cross-Attention Model for Audio-Visual Fusion in Dimensional Emotion Recognition | Code | 1 |
| Dynamic Emotion Modeling with Learnable Graphs and Graph Inception Network | Code | 1 |
| CoMPM: Context Modeling with Speaker’s Pre-trained Memory Tracking for Emotion Recognition in Conversation | Code | 1 |
| CFN-ESA: A Cross-Modal Fusion Network with Emotion-Shift Awareness for Dialogue Emotion Recognition | Code | 1 |
| Light-SERNet: A lightweight fully convolutional neural network for speech emotion recognition | Code | 1 |
| LSSED: a large-scale dataset and benchmark for speech emotion recognition | Code | 1 |
| CARAT: Contrastive Feature Reconstruction and Aggregation for Multi-Modal Multi-Label Emotion Recognition | Code | 1 |
| Mamba-Enhanced Text-Audio-Video Alignment Network for Emotion Recognition in Conversations | Code | 1 |
| ChatGPT: Jack of all trades, master of none | Code | 1 |
| MDPE: A Multimodal Deception Dataset with Personality and Emotional Characteristics | Code | 1 |
| Accuracy enhancement method for speech emotion recognition from spectrogram using temporal frequency correlation and positional information learning through knowledge transfer | Code | 1 |
| Missing Modality Imagination Network for Emotion Recognition with Uncertain Missing Modalities | Code | 1 |
| CAGE: Circumplex Affect Guided Expression Inference | Code | 1 |
Page 9 of 82

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | M2D-CLAP | EmoA | 77.4 | | Unverified |
| 2 | M2D2 | EmoA | 76.7 | | Unverified |
| 3 | M2D | EmoA | 76.1 | | Unverified |
| 4 | Jukebox (Pre-training: CALM) | EmoA | 72.1 | | Unverified |
| 5 | CLMR (Pre-training: contrastive) | EmoA | 67.8 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | LogisticRegression on posteriors of xlsr-Wav2Vec2.0&bi-LSTM+Attention | Accuracy | 86.7 | | Unverified |
| 2 | MultiMAE-DER | WAR | 83.61 | | Unverified |
| 3 | Intermediate-Attention-Fusion | Accuracy | 81.58 | | Unverified |
| 4 | Logistic Regression on posteriors of the CNN-14&biLSTM-GuidedST | Accuracy | 80.08 | | Unverified |
| 5 | ERANN-0-4 | Accuracy | 74.8 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | CAGE | Top-3 Accuracy (%) | 14.73 | | Unverified |
| 2 | FocusCLIP | Top-3 Accuracy (%) | 13.73 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | VGG based | 5-class test accuracy | 66.13 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | MaSaC-ERC-Z | F1-score (Weighted) | 51.17 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | BiHDM | Accuracy | 40.34 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | w2v2-L-robust-12 | Concordance correlation coefficient (CCC) | 0.64 | | Unverified |
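The concordance correlation coefficient (CCC) used as the metric above measures agreement between predicted and gold continuous emotion annotations (e.g. arousal or valence traces). For readers unfamiliar with it, here is a minimal NumPy sketch of the standard Lin's CCC formula; the function name and sample values are illustrative, not taken from the benchmarked system:

```python
import numpy as np

def ccc(x, y):
    """Lin's concordance correlation coefficient between two 1-D arrays."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()            # population variances
    cov = ((x - mx) * (y - my)).mean()   # population covariance
    return 2 * cov / (vx + vy + (mx - my) ** 2)

# Identical sequences give a CCC of 1.0; unlike Pearson correlation,
# a constant offset or scale mismatch lowers the score.
print(ccc([0.1, 0.4, 0.8], [0.1, 0.4, 0.8]))
```

Unlike plain Pearson correlation, CCC penalizes both mean shift and scale mismatch, which is why it is the usual choice for dimensional emotion benchmarks.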
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | 4D-aNN | Accuracy | 96.1 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | CNN | | 1'" | 1 | Unverified |