SOTAVerified

Emotion Recognition

Emotion recognition is an important area of research for enabling effective human-computer interaction. Human emotions can be detected from speech signals, facial expressions, body language, and electroencephalography (EEG). Source: Using Deep Autoencoders for Facial Expression Recognition
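
As a concrete entry point for the speech modality, here is a minimal sketch of speech emotion recognition by fine-tuning wav2vec 2.0 (cf. the wav2vec 2.0 fine-tuning paper in the list below). The checkpoint name and the four-label set are illustrative assumptions, not taken from this page, and the classification head is untrained until fine-tuned on labeled emotion data.

```python
# Minimal sketch: speech emotion recognition with a wav2vec 2.0 encoder.
# ASSUMPTIONS: the checkpoint name and label set are illustrative; the
# classification head is randomly initialized and must be fine-tuned on
# labeled emotion data before its predictions mean anything.
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2ForSequenceClassification

LABELS = ["angry", "happy", "neutral", "sad"]  # assumed 4-class setup

extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-base")
model = Wav2Vec2ForSequenceClassification.from_pretrained(
    "facebook/wav2vec2-base", num_labels=len(LABELS)
)
model.eval()

def predict_emotion(waveform: torch.Tensor, sample_rate: int = 16_000) -> str:
    """Classify a mono 16 kHz waveform into one of LABELS."""
    inputs = extractor(waveform.numpy(), sampling_rate=sample_rate, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits  # shape: (1, num_labels)
    return LABELS[int(logits.argmax(dim=-1))]
```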

Papers

Showing 101–150 of 2041 papers (page 3 of 41)

| Title | Status | Hype |
| --- | --- | --- |
| Disentangled Variational Autoencoder for Emotion Recognition in Conversations | Code | 1 |
| Emotion-Qwen: Training Hybrid Experts for Unified Emotion and General Vision-Language Understanding | Code | 1 |
| ECPE-2D: Emotion-Cause Pair Extraction based on Joint Two-Dimensional Representation, Interaction and Prediction | Code | 1 |
| Emotion Understanding in Videos Through Body, Context, and Visual-Semantic Embedding Loss | Code | 1 |
| A Japanese Dataset for Subjective and Objective Sentiment Polarity Classification in Micro Blog Domain | Code | 1 |
| A Joint Cross-Attention Model for Audio-Visual Fusion in Dimensional Emotion Recognition | Code | 1 |
| Engagement Detection with Multi-Task Training in E-Learning Environments | Code | 1 |
| Enhancing Modal Fusion by Alignment and Label Matching for Multimodal Emotion Recognition | Code | 1 |
| Ethics Sheet for Automatic Emotion Recognition and Sentiment Analysis | Code | 1 |
| Ethics Sheets for AI Tasks | Code | 1 |
| Accuracy enhancement method for speech emotion recognition from spectrogram using temporal frequency correlation and positional information learning through knowledge transfer | Code | 1 |
| Exploring Remote Physiological Signal Measurement under Dynamic Lighting Conditions at Night: Dataset, Experiment, and Analysis | Code | 1 |
| Exploring Wav2vec 2.0 fine-tuning for improved speech emotion recognition | Code | 1 |
| Facial Affective Behavior Analysis with Instruction Tuning | Code | 1 |
| Beyond Silent Letters: Amplifying LLMs in Emotion Recognition with Vocal Nuances | Code | 1 |
| Facial Emotion Recognition with Noisy Multi-task Annotations | Code | 1 |
| Multitask Emotion Recognition with Incomplete Labels | Code | 1 |
| EmoGator: A New Open Source Vocal Burst Dataset with Baseline Machine Learning Classification Methodologies | Code | 1 |
| ADVISER: A Toolkit for Developing Multi-modal, Multi-domain and Socially-engaged Conversational Agents | Code | 1 |
| Curriculum Learning Meets Directed Acyclic Graph for Multimodal Emotion Recognition | Code | 1 |
| GA2MIF: Graph and Attention Based Two-Stage Multi-Source Information Fusion for Conversational Emotion Detection | Code | 1 |
| Density Adaptive Attention is All You Need: Robust Parameter-Efficient Fine-Tuning Across Multiple Modalities | Code | 1 |
| GMSS: Graph-Based Multi-Task Self-Supervised Learning for EEG Emotion Recognition | Code | 1 |
| A Multimodal Corpus for Emotion Recognition in Sarcasm | Code | 1 |
| GPT-4V with Emotion: A Zero-shot Benchmark for Generalized Emotion Recognition | Code | 1 |
| GPT as Psychologist? Preliminary Evaluations for GPT-4V on Visual Affective Computing | Code | 1 |
| Cross Task Neural Architecture Search for EEG Signal Classifications | Code | 1 |
| Cross-Lingual Cross-Age Group Adaptation for Low-Resource Elderly Speech Emotion Recognition | Code | 1 |
| Crowdsourced and Automatic Speech Prominence Estimation | Code | 1 |
| Decoupled Multimodal Distilling for Emotion Recognition | Code | 1 |
| Contrast and Generation Make BART a Good Dialogue Emotion Recognizer | Code | 1 |
| Cooperative Sentiment Agents for Multimodal Sentiment Analysis | Code | 1 |
| Contextual Information and Commonsense Based Prompt for Emotion Recognition in Conversation | Code | 1 |
| CoMPM: Context Modeling with Speaker's Pre-trained Memory Tracking for Emotion Recognition in Conversation | Code | 1 |
| Continuous Emotion Recognition using Visual-audio-linguistic information: A Technical Report for ABAW3 | Code | 1 |
| Cross Attentional Audio-Visual Fusion for Dimensional Emotion Recognition | Code | 1 |
| Deep Multilayer Perceptrons for Dimensional Speech Emotion Recognition | Code | 1 |
| CLARA: Multilingual Contrastive Learning for Audio Representation Acquisition | Code | 1 |
| ChatGPT: Jack of all trades, master of none | Code | 1 |
| Cluster-Level Contrastive Learning for Emotion Recognition in Conversations | Code | 1 |
| CARAT: Contrastive Feature Reconstruction and Aggregation for Multi-Modal Multi-Label Emotion Recognition | Code | 1 |
| CAGE: Circumplex Affect Guided Expression Inference | Code | 1 |
| Compact Graph Architecture for Speech Emotion Recognition | Code | 1 |
| Context Based Emotion Recognition using EMOTIC Dataset | Code | 1 |
| Context De-confounded Emotion Recognition | Code | 1 |
| CFN-ESA: A Cross-Modal Fusion Network with Emotion-Shift Awareness for Dialogue Emotion Recognition | Code | 1 |
| Conversation Understanding using Relational Temporal Graph Neural Networks with Auxiliary Cross-Modality Interaction | Code | 1 |
| CMCRD: Cross-Modal Contrastive Representation Distillation for Emotion Recognition | Code | 1 |
| BiosERC: Integrating Biography Speakers Supported by LLMs for ERC Tasks | Code | 1 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | M2D-CLAP | EmoA | 77.4 | – | Unverified |
| 2 | M2D2 | EmoA | 76.7 | – | Unverified |
| 3 | M2D | EmoA | 76.1 | – | Unverified |
| 4 | Jukebox (Pre-training: CALM) | EmoA | 72.1 | – | Unverified |
| 5 | CLMR (Pre-training: contrastive) | EmoA | 67.8 | – | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | LogisticRegression on posteriors of xlsr-Wav2Vec2.0&bi-LSTM+Attention | Accuracy | 86.7 | – | Unverified |
| 2 | MultiMAE-DER | WAR | 83.61 | – | Unverified |
| 3 | Intermediate-Attention-Fusion | Accuracy | 81.58 | – | Unverified |
| 4 | Logistic Regression on posteriors of the CNN-14&biLSTM-GuidedST | Accuracy | 80.08 | – | Unverified |
| 5 | ERANN-0-4 | Accuracy | 74.8 | – | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | CAGE | Top-3 Accuracy (%) | 14.73 | – | Unverified |
| 2 | FocusCLIP | Top-3 Accuracy (%) | 13.73 | – | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | VGG based | 5-class test accuracy | 66.13 | – | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | MaSaC-ERC-Z | F1-score (Weighted) | 51.17 | – | Unverified |

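The weighted F1 reported above averages per-class F1 scores weighted by class support, which matters for the imbalanced label distributions typical of conversational emotion datasets. A small sketch with scikit-learn; the labels below are made up for illustration:

```python
# Weighted F1: per-class F1 averaged with class support as weights,
# so frequent classes (e.g. "neutral") dominate the score.
# The label values below are illustrative, not from any benchmark here.
from sklearn.metrics import f1_score

y_true = ["joy", "anger", "neutral", "neutral", "sadness"]
y_pred = ["joy", "neutral", "neutral", "neutral", "anger"]

print(f1_score(y_true, y_pred, average="weighted"))
```
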
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | BiHDM | Accuracy | 40.34 | – | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | w2v2-L-robust-12 | Concordance correlation coefficient (CCC) | 0.64 | – | Unverified |

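CCC, typically reported for dimensional (valence/arousal-style) emotion prediction, penalizes both decorrelation and mean/scale offsets between predictions and gold annotations. A small NumPy sketch of the standard definition (Lin, 1989):

```python
# Concordance correlation coefficient (Lin, 1989):
#   CCC = 2 * cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2)
# Unlike Pearson's r, CCC also penalizes shifts in mean and scale.
import numpy as np

def ccc(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    mu_t, mu_p = y_true.mean(), y_pred.mean()
    cov = ((y_true - mu_t) * (y_pred - mu_p)).mean()
    return 2 * cov / (y_true.var() + y_pred.var() + (mu_t - mu_p) ** 2)

# Perfect agreement gives 1.0; a constant offset lowers the score.
print(ccc(np.array([0.1, 0.5, 0.9]), np.array([0.1, 0.5, 0.9])))  # 1.0
print(ccc(np.array([0.1, 0.5, 0.9]), np.array([0.3, 0.7, 1.1])))  # < 1.0
```
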
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | 4D-aNN | Accuracy | 96.1 | – | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | CNN | – | – | – | Unverified |