SOTAVerified

Emotion Recognition

Emotion recognition is an important research area for enabling effective human-computer interaction. Human emotions can be detected from speech signals, facial expressions, body language, and electroencephalography (EEG). Source: Using Deep Autoencoders for Facial Expression Recognition
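As a toy illustration of the speech route (not a system from any paper listed below), the sketch trains a nearest-centroid classifier on two simple acoustic features. The synthetic signals, the feature choice, and the "calm"/"aroused" framing are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def features(signal: np.ndarray) -> np.ndarray:
    """Two toy acoustic features: RMS energy and zero-crossing rate."""
    rms = np.sqrt(np.mean(signal ** 2))
    zcr = np.mean(np.abs(np.diff(np.sign(signal)))) / 2
    return np.array([rms, zcr])

# Synthetic stand-ins: "aroused" speech modeled simply as louder noise.
calm = [rng.normal(0, 0.2, 1000) for _ in range(20)]
aroused = [rng.normal(0, 1.0, 1000) for _ in range(20)]

X = np.array([features(s) for s in calm + aroused])
y = np.array([0] * 20 + [1] * 20)  # 0 = calm, 1 = aroused

# Nearest-centroid classifier in the 2-D feature space.
centroids = np.array([X[y == c].mean(axis=0) for c in (0, 1)])

def predict(signal: np.ndarray) -> int:
    d = np.linalg.norm(centroids - features(signal), axis=1)
    return int(np.argmin(d))

print(predict(rng.normal(0, 0.2, 1000)))  # expected: 0 (calm)
```

Real systems replace the toy features with learned representations (e.g. wav2vec 2.0 embeddings, as several models in the tables below do), but the pipeline shape — features, then a classifier — is the same.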

Papers

Showing 526-550 of 2041 papers

| Title | Status | Hype |
|---|---|---|
| Parameter Efficient Finetuning for Speech Emotion Recognition and Domain Adaptation | | 0 |
| EmoBench: Evaluating the Emotional Intelligence of Large Language Models | Code | 2 |
| Ain't Misbehavin' -- Using LLMs to Generate Expressive Robot Behavior in Conversations with the Tabletop Robot Haru | | 0 |
| Personalized Large Language Models | Code | 2 |
| Multi-Modal Emotion Recognition by Text, Speech and Video Using Pretrained Transformers | | 0 |
| Persian Speech Emotion Recognition by Fine-Tuning Transformers | | 0 |
| CochCeps-Augment: A Novel Self-Supervised Contrastive Learning Using Cochlear Cepstrum-based Masking for Speech Emotion Recognition | Code | 0 |
| Evaluation Metrics for Automated Typographic Poster Generation | Code | 0 |
| English Prompts are Better for NLI-based Zero-Shot Emotion Classification than Target-Language Prompts | | 0 |
| Layer-Wise Analysis of Self-Supervised Acoustic Word Embeddings: A Study on Speech Emotion Recognition | | 0 |
| Graph Neural Networks in EEG-based Emotion Recognition: A Survey | | 0 |
| STAA-Net: A Sparse and Transferable Adversarial Attack for Speech Emotion Recognition | | 0 |
| Are Paralinguistic Representations all that is needed for Speech Emotion Recognition? | | 0 |
| FindingEmo: An Image Dataset for Emotion Recognition in the Wild | | 0 |
| LRDif: Diffusion Models for Under-Display Camera Emotion Recognition | | 0 |
| Neuromorphic Valence and Arousal Estimation | | 0 |
| Real-time EEG-based Emotion Recognition Model using Principal Component Analysis and Tree-based Models for Neurohumanities | Code | 0 |
| AMuSE: Adaptive Multimodal Analysis for Speaker Emotion Recognition in Group Conversations | | 0 |
| MF-AED-AEC: Speech Emotion Recognition by Leveraging Multimodal Fusion, Asr Error Detection, and Asr Error Correction | | 0 |
| Density Adaptive Attention is All You Need: Robust Parameter-Efficient Fine-Tuning Across Multiple Modalities | Code | 1 |
| Revealing Emotional Clusters in Speaker Embeddings: A Contrastive Learning Strategy for Speech Emotion Recognition | | 0 |
| Speech Swin-Transformer: Exploring a Hierarchical Transformer with Shifted Windows for Speech Emotion Recognition | | 0 |
| Self context-aware emotion perception on human-robot interaction | | 0 |
| Improving Speaker-independent Speech Emotion Recognition Using Dynamic Joint Distribution Adaptation | | 0 |
| TelME: Teacher-leading Multimodal Fusion Network for Emotion Recognition in Conversation | Code | 1 |
Page 22 of 82

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | M2D-CLAP | EmoA | 77.4 | | Unverified |
| 2 | M2D2 | EmoA | 76.7 | | Unverified |
| 3 | M2D | EmoA | 76.1 | | Unverified |
| 4 | Jukebox (Pre-training: CALM) | EmoA | 72.1 | | Unverified |
| 5 | CLMR (Pre-training: contrastive) | EmoA | 67.8 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | LogisticRegression on posteriors of xlsr-Wav2Vec2.0&bi-LSTM+Attention | Accuracy | 86.7 | | Unverified |
| 2 | MultiMAE-DER | WAR | 83.61 | | Unverified |
| 3 | Intermediate-Attention-Fusion | Accuracy | 81.58 | | Unverified |
| 4 | Logistic Regression on posteriors of the CNN-14&biLSTM-GuidedST | Accuracy | 80.08 | | Unverified |
| 5 | ERANN-0-4 | Accuracy | 74.8 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | CAGE | Top-3 Accuracy (%) | 14.73 | | Unverified |
| 2 | FocusCLIP | Top-3 Accuracy (%) | 13.73 | | Unverified |
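Top-3 accuracy, as reported for CAGE and FocusCLIP above, counts a prediction as correct when the gold label appears among a model's three highest-scoring classes. A minimal NumPy sketch; the score matrix and labels below are made up for illustration, not taken from either benchmark:

```python
import numpy as np

def top_k_accuracy(scores: np.ndarray, gold, k: int = 3) -> float:
    """Fraction of samples whose gold label is among the k best-scored classes."""
    topk = np.argsort(scores, axis=1)[:, -k:]  # indices of the k highest scores
    return float(np.mean([g in row for g, row in zip(gold, topk)]))

# Illustrative class scores over 5 emotion classes for 3 samples.
scores = np.array([
    [0.1, 0.5, 0.2, 0.1, 0.1],    # gold 1 -> top score, hit
    [0.3, 0.1, 0.2, 0.25, 0.15],  # gold 4 -> ranked 4th, miss
    [0.2, 0.2, 0.2, 0.3, 0.1],    # gold 2 -> within top 3, hit
])
gold = [1, 4, 2]
print(round(top_k_accuracy(scores, gold, k=3), 2))  # prints 0.67
```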
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | VGG based | 5-class test accuracy | 66.13 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | MaSaC-ERC-Z | F1-score (Weighted) | 51.17 | | Unverified |
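The weighted F1 reported for MaSaC-ERC-Z averages per-class F1 scores weighted by each class's support, which matters for emotion datasets with skewed label distributions. A minimal plain-Python sketch; the labels below are illustrative, not drawn from MaSaC:

```python
from collections import Counter

def weighted_f1(gold, pred) -> float:
    """Support-weighted average of per-class F1 over the labels in `gold`."""
    support = Counter(gold)
    total = 0.0
    for c in support:
        tp = sum(1 for g, p in zip(gold, pred) if g == c and p == c)
        fp = sum(1 for g, p in zip(gold, pred) if g != c and p == c)
        fn = sum(1 for g, p in zip(gold, pred) if g == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        total += support[c] / len(gold) * f1  # weight by class support
    return total

gold = ["joy", "joy", "anger", "neutral", "neutral", "neutral"]
pred = ["joy", "anger", "anger", "neutral", "neutral", "joy"]
print(round(weighted_f1(gold, pred), 2))  # prints 0.68
```

This matches scikit-learn's `f1_score(..., average="weighted")` convention; leaderboards usually report it scaled to a percentage.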
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | BiHDM | Accuracy | 40.34 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | w2v2-L-robust-12 | Concordance correlation coefficient (CCC) | 0.64 | | Unverified |
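The concordance correlation coefficient (CCC) used above measures agreement between predicted and gold continuous emotion ratings (e.g. arousal or valence), penalizing both scale and location shifts: CCC = 2·cov(x, y) / (var(x) + var(y) + (mean(x) − mean(y))²). A minimal NumPy sketch; the rating arrays are invented for illustration:

```python
import numpy as np

def ccc(pred: np.ndarray, gold: np.ndarray) -> float:
    """Concordance correlation coefficient between two 1-D rating arrays."""
    mp, mg = pred.mean(), gold.mean()
    vp, vg = pred.var(), gold.var()           # population variances
    cov = ((pred - mp) * (gold - mg)).mean()  # population covariance
    return 2 * cov / (vp + vg + (mp - mg) ** 2)

# Illustrative arousal predictions vs. gold ratings (made-up numbers).
pred = np.array([0.1, 0.4, 0.35, 0.8])
gold = np.array([0.0, 0.5, 0.30, 0.9])
print(round(ccc(pred, gold), 3))  # prints 0.952
```

Unlike Pearson correlation, CCC only reaches 1.0 when predictions match the gold ratings exactly, not merely linearly.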
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | 4D-aNN | Accuracy | 96.1 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | CNN | 1'" | 1 | | Unverified |