SOTAVerified

Emotion Recognition

Emotion recognition is an important area of research for enabling effective human-computer interaction. Human emotions can be detected from speech signals, facial expressions, body language, and electroencephalography (EEG). Source: Using Deep Autoencoders for Facial Expression Recognition

Papers

Showing 401–425 of 2041 papers

| Title | Status | Hype |
|---|---|---|
| Are Generative Language Models Multicultural? A Study on Hausa Culture and Emotions using ChatGPT |  | 0 |
| Context-Aware Siamese Networks for Efficient Emotion Recognition in Conversation |  | 0 |
| A Real Time Facial Expression Classification System Using Local Binary Patterns |  | 0 |
| AIMA at SemEval-2024 Task 10: History-Based Emotion Recognition in Hindi-English Code-Mixed Conversations |  | 0 |
| Crowdsourcing a Word-Emotion Association Lexicon |  | 0 |
| Context-aware Interactive Attention for Multi-modal Sentiment and Emotion Analysis |  | 0 |
| A cross-corpus study on speech emotion recognition |  | 0 |
| Context-aware Cascade Attention-based RNN for Video Emotion Recognition |  | 0 |
| A Question Answering Approach for Emotion Cause Extraction |  | 0 |
| Context-Dependent Domain Adversarial Neural Network for Multimodal Emotion Recognition |  | 0 |
| Are Large Language Models More Empathetic than Humans? |  | 0 |
| Context-Dependent Models for Predicting and Characterizing Facial Expressiveness |  | 0 |
| Are Mamba-based Audio Foundation Models the Best Fit for Non-Verbal Emotion Recognition? |  | 0 |
| Context-LGM: Leveraging Object-Context Relation for Context-Aware Object Recognition |  | 0 |
| AI in Pursuit of Happiness, Finding Only Sadness: Multi-Modal Facial Emotion Recognition Challenge |  | 0 |
| Contextual Dependencies in Time-Continuous Multidimensional Affect Recognition |  | 0 |
| Accommodating Missing Modalities in Time-Continuous Multimodal Emotion Recognition |  | 0 |
| Contextual Emotion Recognition using Large Vision Language Models |  | 0 |
| Contemplating Visual Emotions: Understanding and Overcoming Dataset Bias |  | 0 |
| Contextualized Emotion Recognition in Conversation as Sequence Tagging |  | 0 |
| Ain't Misbehavin' -- Using LLMs to Generate Expressive Robot Behavior in Conversations with the Tabletop Robot Haru |  | 0 |
| Construction of Japanese Audio-Visual Emotion Database and Its Application in Emotion Recognition |  | 0 |
| A Quantum Probability Driven Framework for Joint Multi-Modal Sarcasm, Sentiment and Emotion Analysis |  | 0 |
| Continuous Emotion Recognition with Spatiotemporal Convolutional Neural Networks |  | 0 |
| Construction of English-French Multimodal Affective Conversational Corpus from TV Dramas |  | 0 |
Page 17 of 82

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | M2D-CLAP | EmoA | 77.4 |  | Unverified |
| 2 | M2D2 | EmoA | 76.7 |  | Unverified |
| 3 | M2D | EmoA | 76.1 |  | Unverified |
| 4 | Jukebox (Pre-training: CALM) | EmoA | 72.1 |  | Unverified |
| 5 | CLMR (Pre-training: contrastive) | EmoA | 67.8 |  | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | LogisticRegression on posteriors of xlsr-Wav2Vec2.0&bi-LSTM+Attention | Accuracy | 86.7 |  | Unverified |
| 2 | MultiMAE-DER | WAR | 83.61 |  | Unverified |
| 3 | Intermediate-Attention-Fusion | Accuracy | 81.58 |  | Unverified |
| 4 | Logistic Regression on posteriors of the CNN-14&biLSTM-GuidedST | Accuracy | 80.08 |  | Unverified |
| 5 | ERANN-0-4 | Accuracy | 74.8 |  | Unverified |
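The table above mixes Accuracy with WAR (weighted average recall). In the speech-emotion literature, WAR is the per-class recall weighted by class support, which reduces to plain accuracy, as opposed to UAR (unweighted average recall), the plain mean of per-class recalls; UAR penalizes models that ignore rare classes. A minimal NumPy sketch of both (the function name is mine, not from any paper above):

```python
import numpy as np

def war_uar(y_true, y_pred):
    """Weighted (WAR) and unweighted (UAR) average recall.

    WAR weights each class recall by its support, so it equals
    plain accuracy; UAR averages the per-class recalls equally.
    """
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    classes = np.unique(y_true)
    # recall of class c = fraction of true-c samples predicted as c
    recalls = np.array([(y_pred[y_true == c] == c).mean() for c in classes])
    support = np.array([(y_true == c).sum() for c in classes])
    war = np.average(recalls, weights=support)  # equals overall accuracy
    uar = recalls.mean()
    return war, uar
```

On an imbalanced toy example such as `war_uar([0, 0, 0, 1], [0, 0, 1, 1])`, WAR is 0.75 (the accuracy) while UAR is 5/6, which is why the two are reported separately.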
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | CAGE | Top-3 Accuracy (%) | 14.73 |  | Unverified |
| 2 | FocusCLIP | Top-3 Accuracy (%) | 13.73 |  | Unverified |
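Top-3 Accuracy, as reported above, counts a sample as correct when the true label appears among the model's three highest-scoring classes. A minimal sketch over a raw score matrix (the function name is mine):

```python
import numpy as np

def top_k_accuracy(scores, y_true, k=3):
    """Fraction of samples whose true label is among the k
    highest-scoring classes. `scores` has shape (n_samples, n_classes).
    """
    scores = np.asarray(scores)
    y_true = np.asarray(y_true)
    # indices of the k highest scores per row (order among them irrelevant)
    topk = np.argsort(scores, axis=1)[:, -k:]
    hits = (topk == y_true[:, None]).any(axis=1)
    return hits.mean()
```

For example, with `scores = [[0.1, 0.2, 0.3, 0.4], [0.4, 0.3, 0.2, 0.1]]` and `y_true = [1, 3]`, the first label is inside its top-3 set and the second is not, giving 0.5.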
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | VGG based | 5-class test accuracy | 66.13 |  | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | MaSaC-ERC-Z | F1-score (Weighted) | 51.17 |  | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | BiHDM | Accuracy | 40.34 |  | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | w2v2-L-robust-12 | Concordance correlation coefficient (CCC) | 0.64 |  | Unverified |
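The concordance correlation coefficient (CCC) above is the standard metric for time-continuous (dimensional) emotion prediction: unlike Pearson correlation, it also penalizes differences in mean and scale between predictions and labels. A minimal NumPy sketch of the standard formula, 2·cov(x, y) / (var(x) + var(y) + (mean(x) − mean(y))²), with a function name of my choosing:

```python
import numpy as np

def ccc(y_true, y_pred):
    """Concordance correlation coefficient between two 1-D arrays.

    CCC = 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2)
    It equals 1 only for perfect agreement (identical sequences).
    """
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    mean_t, mean_p = y_true.mean(), y_pred.mean()
    var_t, var_p = y_true.var(), y_pred.var()  # population variances
    cov = ((y_true - mean_t) * (y_pred - mean_p)).mean()
    return 2 * cov / (var_t + var_p + (mean_t - mean_p) ** 2)
```

A constant offset already costs agreement: `ccc([1, 2, 3], [2, 3, 4])` is 4/7 ≈ 0.571 even though the Pearson correlation of the two sequences is 1.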
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | 4D-aNN | Accuracy | 96.1 |  | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | CNN |  | 1'"1 |  | Unverified |