SOTAVerified

Emotion Recognition

Emotion Recognition is an important area of research for enabling effective human-computer interaction. Human emotions can be detected from speech signals, facial expressions, body language, and electroencephalography (EEG). Source: Using Deep Autoencoders for Facial Expression Recognition
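To illustrate the kind of pipeline the papers below benchmark, here is a minimal sketch of emotion classification from fixed-length feature vectors. All names and data are hypothetical: synthetic Gaussian clusters stand in for real features (e.g. averaged MFCCs from speech or band-power features from EEG), and a simple nearest-centroid rule stands in for the learned models reported in the leaderboards.

```python
import numpy as np

# Hypothetical setup: each utterance/recording is summarized as one
# fixed-length feature vector. Real systems would extract these from
# audio, video, or EEG; here we draw synthetic clusters, one per label.
rng = np.random.default_rng(0)
emotions = ["neutral", "happy", "sad"]
dim = 16

# True (hidden) class centroids, spaced apart so classes are separable.
centroids = {e: rng.normal(loc=i * 3.0, scale=1.0, size=dim)
             for i, e in enumerate(emotions)}

def sample(emotion, n):
    """Draw n synthetic feature vectors around an emotion's centroid."""
    return centroids[emotion] + rng.normal(scale=0.5, size=(n, dim))

# "Train": estimate a mean feature vector per class from labeled samples.
train_means = {e: sample(e, 20).mean(axis=0) for e in emotions}

def predict(x):
    """Nearest-centroid classification of a single feature vector."""
    return min(emotions, key=lambda e: np.linalg.norm(x - train_means[e]))

test_x = sample("happy", 1)[0]
print(predict(test_x))
```

The nearest-centroid rule is only a baseline; the entries below replace it with transformers, cross-attention fusion, graph networks, and similar architectures, but the train-on-features / predict-a-label shape of the task is the same.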

Papers

Showing 101–150 of 2041 papers

| Title | Status | Hype |
|-------|--------|------|
| Context De-confounded Emotion Recognition | Code | 1 |
| Context Based Emotion Recognition using EMOTIC Dataset | Code | 1 |
| A cross-modal fusion network based on self-attention and residual structure for multimodal emotion recognition | Code | 1 |
| Exploring Visual Engagement Signals for Representation Learning | Code | 1 |
| A Japanese Dataset for Subjective and Objective Sentiment Polarity Classification in Micro Blog Domain | Code | 1 |
| A Joint Cross-Attention Model for Audio-Visual Fusion in Dimensional Emotion Recognition | Code | 1 |
| Contrast and Generation Make BART a Good Dialogue Emotion Recognizer | Code | 1 |
| Continuous Emotion Recognition using Visual-audio-linguistic information: A Technical Report for ABAW3 | Code | 1 |
| Facial Emotion Recognition: State of the Art Performance on FER2013 | Code | 1 |
| Facial Emotion Recognition Using Transfer Learning in the Deep CNN | Code | 1 |
| Codified audio language modeling learns useful representations for music information retrieval | Code | 1 |
| Emotion-Anchored Contrastive Learning Framework for Emotion Recognition in Conversation | Code | 1 |
| Emotion Recognition from Speech Using Wav2vec 2.0 Embeddings | Code | 1 |
| Cross Attentional Audio-Visual Fusion for Dimensional Emotion Recognition | Code | 1 |
| Beyond Silent Letters: Amplifying LLMs in Emotion Recognition with Vocal Nuances | Code | 1 |
| Cross-Lingual Cross-Age Group Adaptation for Low-Resource Elderly Speech Emotion Recognition | Code | 1 |
| Engagement Detection with Multi-Task Training in E-Learning Environments | Code | 1 |
| ADVISER: A Toolkit for Developing Multi-modal, Multi-domain and Socially-engaged Conversational Agents | Code | 1 |
| Crowdsourced and Automatic Speech Prominence Estimation | Code | 1 |
| Density Adaptive Attention is All You Need: Robust Parameter-Efficient Fine-Tuning Across Multiple Modalities | Code | 1 |
| Curriculum Learning Meets Directed Acyclic Graph for Multimodal Emotion Recognition | Code | 1 |
| Global-Local Attention for Emotion Recognition | Code | 1 |
| EmoGator: A New Open Source Vocal Burst Dataset with Baseline Machine Learning Classification Methodologies | Code | 1 |
| A Multimodal Corpus for Emotion Recognition in Sarcasm | Code | 1 |
| Decoupled Multimodal Distilling for Emotion Recognition | Code | 1 |
| GPT as Psychologist? Preliminary Evaluations for GPT-4V on Visual Affective Computing | Code | 1 |
| EmoNet: A Transfer Learning Framework for Multi-Corpus Speech Emotion Recognition | Code | 1 |
| Automated Parkinson's Disease Detection and Affective Analysis from Emotional EEG Signals | Code | 1 |
| A vector quantized masked autoencoder for speech emotion recognition | Code | 1 |
| EmoDynamiX: Emotional Support Dialogue Strategy Prediction by Modelling MiXed Emotions and Discourse Dynamics | Code | 1 |
| EmoNeXt: an Adapted ConvNeXt for Facial Emotion Recognition | Code | 1 |
| Continuous Emotion Recognition with Audio-visual Leader-follower Attentive Fusion | Code | 1 |
| emoDARTS: Joint Optimisation of CNN & Sequential Neural Network Architectures for Superior Speech Emotion Recognition | Code | 1 |
| BHAAV- A Text Corpus for Emotion Analysis from Hindi Stories | Code | 1 |
| Audio-Visual Fusion for Emotion Recognition in the Valence-Arousal Space Using Joint Cross-Attention | Code | 1 |
| EmoBERTa: Speaker-Aware Emotion Recognition in Conversation with RoBERTa | Code | 1 |
| Emo-DNA: Emotion Decoupling and Alignment Learning for Cross-Corpus Speech Emotion Recognition | Code | 1 |
| EmoSpeech: Guiding FastSpeech2 Towards Emotional Text to Speech | Code | 1 |
| EEG-Based Emotion Recognition Using Regularized Graph Neural Networks | Code | 1 |
| ECPE-2D: Emotion-Cause Pair Extraction based on Joint Two-Dimensional Representation, Interaction and Prediction | Code | 1 |
| EEG-Based Emotion Recognition Using Genetic Algorithm Optimized Multi-Layer Perceptron | Code | 1 |
| DWFormer: Dynamic Window transFormer for Speech Emotion Recognition | Code | 1 |
| A Transformer-Based Model With Self-Distillation for Multimodal Emotion Recognition in Conversations | Code | 1 |
| A Transformer-based joint-encoding for Emotion Recognition and Sentiment Analysis | Code | 1 |
| A Supervised Information Enhanced Multi-Granularity Contrastive Learning Framework for EEG Based Emotion Recognition | Code | 1 |
| EEGMatch: Learning with Incomplete Labels for Semi-Supervised EEG-based Cross-Subject Emotion Recognition | Code | 1 |
| Attribute Inference Attack of Speech Emotion Recognition in Federated Learning Settings | Code | 1 |
| A proposal for Multimodal Emotion Recognition using aural transformers and Action Units on RAVDESS dataset | Code | 1 |
| A Persian ASR-based SER: Modification of Sharif Emotional Speech Database and Investigation of Persian Text Corpora | Code | 1 |
| Arabic Speech Emotion Recognition Employing Wav2vec2.0 and HuBERT Based on BAVED Dataset | Code | 1 |
Page 3 of 41

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | M2D-CLAP | EmoA | 77.4 | | Unverified |
| 2 | M2D2 | EmoA | 76.7 | | Unverified |
| 3 | M2D | EmoA | 76.1 | | Unverified |
| 4 | Jukebox (Pre-training: CALM) | EmoA | 72.1 | | Unverified |
| 5 | CLMR (Pre-training: contrastive) | EmoA | 67.8 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | LogisticRegression on posteriors of xlsr-Wav2Vec2.0&bi-LSTM+Attention | Accuracy | 86.7 | | Unverified |
| 2 | MultiMAE-DER | WAR | 83.61 | | Unverified |
| 3 | Intermediate-Attention-Fusion | Accuracy | 81.58 | | Unverified |
| 4 | Logistic Regression on posteriors of the CNN-14&biLSTM-GuidedST | Accuracy | 80.08 | | Unverified |
| 5 | ERANN-0-4 | Accuracy | 74.8 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | CAGE | Top-3 Accuracy (%) | 14.73 | | Unverified |
| 2 | FocusCLIP | Top-3 Accuracy (%) | 13.73 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | VGG based | 5-class test accuracy | 66.13 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | MaSaC-ERC-Z | F1-score (Weighted) | 51.17 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | BiHDM | Accuracy | 40.34 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | w2v2-L-robust-12 | Concordance correlation coefficient (CCC) | 0.64 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | 4D-aNN | Accuracy | 96.1 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | CNN | | 1'"1 | | Unverified |