SOTAVerified

Emotion Recognition

Emotion recognition is an important research area for enabling effective human-computer interaction. Human emotions can be detected from speech signals, facial expressions, body language, and electroencephalography (EEG). Source: Using Deep Autoencoders for Facial Expression Recognition
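As a hypothetical illustration of the simplest form of the task (not taken from any paper listed below), a speech-emotion classifier maps acoustic feature vectors to emotion labels. The sketch below substitutes synthetic 2-D "features" for real acoustic statistics and trains a logistic-regression classifier with plain numpy; all names and values are illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch of a minimal two-class speech-emotion classifier.
# Real systems extract acoustic features (e.g. MFCC statistics) from audio;
# here synthetic 2-D vectors stand in for "angry" vs. "calm" clips.
rng = np.random.default_rng(0)
angry = rng.normal(loc=[2.0, 2.0], scale=0.5, size=(100, 2))
calm = rng.normal(loc=[-2.0, -2.0], scale=0.5, size=(100, 2))
X = np.vstack([angry, calm])
y = np.concatenate([np.ones(100), np.zeros(100)])

# Logistic regression trained by batch gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted P(angry)
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

pred = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(float)
accuracy = np.mean(pred == y)
print(f"training accuracy: {accuracy:.2f}")
```

With well-separated toy clusters the classifier converges quickly; real benchmarks below differ mainly in the features and models, not in this basic pipeline.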

Papers

Showing 1051–1075 of 2041 papers

Title | Status | Hype
Speech Emotion Recognition with Global-Aware Fusion on Multi-scale Feature Representation | Code | 1
Physically Disentangled Representations | Code | 0
Transformer-Based Self-Supervised Learning for Emotion Recognition | - | 0
Engagement Detection with Multi-Task Training in E-Learning Environments | Code | 1
Emotional Speech Recognition with Pre-trained Deep Visual Models | Code | 0
Learning Speech Emotion Representations in the Quaternion Domain | Code | 0
Probing Speech Emotion Recognition Transformers for Linguistic Knowledge | - | 0
MMER: Multimodal Multi-task Learning for Speech Emotion Recognition | Code | 1
M-MELD: A Multilingual Multi-Party Dataset for Emotion Recognition in Conversations | Code | 0
Neural Architecture Search for Speech Emotion Recognition | - | 0
CTA-RNN: Channel and Temporal-wise Attention RNN Leveraging Pre-trained ASR Embeddings for Speech Emotion Recognition | - | 0
Speech Emotion Recognition with Co-Attention based Multi-level Acoustic Information | Code | 1
An EEG-Based Multi-Modal Emotion Database with Both Posed and Authentic Facial Actions for Emotion Analysis | - | 0
Towards Transferable Speech Emotion Representation: On loss functions for cross-lingual latent representations | - | 0
A Joint Cross-Attention Model for Audio-Visual Fusion in Dimensional Emotion Recognition | Code | 1
Continuous Metric Learning For Transferable Speech Emotion Recognition and Embedding Across Low-resource Languages | - | 0
A Dataset for Speech Emotion Recognition in Greek Theatrical Plays | Code | 0
A Speech Representation Anonymization Framework via Selective Noise Perturbation | Code | 0
MDAN: Multi-level Dependent Attention Network for Visual Emotion Analysis | - | 0
Frame-level Prediction of Facial Expressions, Valence, Arousal and Action Units for Mobile Devices | Code | 2
EmoCaps: Emotion Capsule based Model for Conversational Emotion Recognition | - | 0
EmotionNAS: Two-stream Neural Architecture Search for Speech Emotion Recognition | - | 0
Continuous-Time Audiovisual Fusion with Recurrence vs. Attention for In-The-Wild Affect Recognition | - | 0
Continuous Emotion Recognition using Visual-audio-linguistic information: A Technical Report for ABAW3 | Code | 1
Page 43 of 82

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | M2D-CLAP | EmoA | 77.4 | - | Unverified
2 | M2D2 | EmoA | 76.7 | - | Unverified
3 | M2D | EmoA | 76.1 | - | Unverified
4 | Jukebox (Pre-training: CALM) | EmoA | 72.1 | - | Unverified
5 | CLMR (Pre-training: contrastive) | EmoA | 67.8 | - | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | LogisticRegression on posteriors of xlsr-Wav2Vec2.0&bi-LSTM+Attention | Accuracy | 86.7 | - | Unverified
2 | MultiMAE-DER | WAR | 83.61 | - | Unverified
3 | Intermediate-Attention-Fusion | Accuracy | 81.58 | - | Unverified
4 | Logistic Regression on posteriors of the CNN-14&biLSTM-GuidedST | Accuracy | 80.08 | - | Unverified
5 | ERANN-0-4 | Accuracy | 74.8 | - | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | CAGE | Top-3 Accuracy (%) | 14.73 | - | Unverified
2 | FocusCLIP | Top-3 Accuracy (%) | 13.73 | - | Unverified
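Top-3 accuracy, as reported above, counts a prediction as correct when the true label appears among the model's three highest-scoring classes. The helper below is a hypothetical illustration (function name and example scores are my own, not from the benchmark):

```python
import numpy as np

def top_k_accuracy(scores, labels, k=3):
    """Fraction of samples whose true label is among the k highest-scoring classes.

    scores: (n_samples, n_classes) array of class scores.
    labels: (n_samples,) array of true class indices.
    """
    topk = np.argsort(scores, axis=1)[:, -k:]  # indices of the k best classes
    hits = [label in row for row, label in zip(topk, labels)]
    return float(np.mean(hits))

scores = np.array([[0.1, 0.5, 0.2, 0.2],
                   [0.6, 0.1, 0.2, 0.1]])
labels = np.array([2, 3])
print(top_k_accuracy(scores, labels, k=3))  # both labels fall in the top 3 -> 1.0
```

Note that top-3 accuracy is always at least as high as top-1 accuracy, which is why it is the preferred metric when the label space is large or ambiguous.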
# | Model | Metric | Claimed | Verified | Status
1 | VGG based | 5-class test accuracy | 66.13 | - | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | MaSaC-ERC-Z | F1-score (Weighted) | 51.17 | - | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | BiHDM | Accuracy | 40.34 | - | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | w2v2-L-robust-12 | Concordance correlation coefficient (CCC) | 0.64 | - | Unverified
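The concordance correlation coefficient (CCC) reported above is the standard agreement metric for dimensional (valence/arousal) emotion prediction: it combines Pearson correlation with a penalty for mean and scale differences between predictions and gold ratings. A small illustrative implementation (function name is mine, not from the benchmark):

```python
import numpy as np

def ccc(x, y):
    """Concordance correlation coefficient between two rating sequences."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()  # population variances
    cov = np.mean((x - mx) * (y - my))
    # 2*cov / (var_x + var_y + (mean difference)^2)
    return 2 * cov / (vx + vy + (mx - my) ** 2)

print(ccc([1, 2, 3, 4], [1, 2, 3, 4]))  # perfect agreement -> 1.0
```

Unlike plain Pearson correlation, CCC drops below 1.0 for predictions that are perfectly correlated but shifted or rescaled, which matters when absolute valence/arousal values are compared.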
# | Model | Metric | Claimed | Verified | Status
1 | 4D-aNN | Accuracy | 96.1 | - | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | CNN | 1'" | 1 | - | Unverified