SOTAVerified

Emotion Recognition

Emotion recognition is an important research area for enabling effective human-computer interaction. Human emotions can be detected from speech signals, facial expressions, body language, and electroencephalography (EEG). Source: Using Deep Autoencoders for Facial Expression Recognition

Papers

Showing 701–725 of 2041 papers

Title | Status | Hype
Context-Aware Siamese Networks for Efficient Emotion Recognition in Conversation | | 0
Deep CNN with late fusion for realtime multimodal emotion recognition | | 0
Joint Contrastive Learning with Feature Alignment for Cross-Corpus EEG-based Emotion Recognition | | 0
Customising General Large Language Models for Specialised Emotion Recognition Tasks | Code | 0
Improving Personalisation in Valence and Arousal Prediction using Data Augmentation | | 0
AIMDiT: Modality Augmentation and Interaction via Multimodal Dimension Transformation for Emotion Recognition in Conversations | | 0
The Power of Properties: Uncovering the Influential Factors in Emotion Classification | | 0
Multimodal Emotion Recognition by Fusing Video Semantic in MOOC Learning Scenarios | | 0
What is Learnt by the LEArnable Front-end (LEAF)? Adapting Per-Channel Energy Normalisation (PCEN) to Noisy Conditions | Code | 0
Improving Facial Landmark Detection Accuracy and Efficiency with Knowledge Distillation | | 0
nEMO: Dataset of Emotional Speech in Polish | Code | 0
Dynamic Resolution Guidance for Facial Expression Recognition | | 0
Music Recommendation Based on Facial Emotion Recognition | | 0
IITK at SemEval-2024 Task 10: Who is the speaker? Improving Emotion Recognition and Flip Reasoning in Conversations via Speaker Embeddings | | 0
Towards Bi-Hemispheric Emotion Mapping through EEG: A Dual-Stream Neural Network Approach | | 0
Exploring Emotions in Multi-componential Space using Interactive VR Games | | 0
Affective-NLI: Towards Accurate and Interpretable Personality Recognition in Conversation | Code | 0
Automated Assessment of Encouragement and Warmth in Classrooms Leveraging Multimodal Emotional Features and ChatGPT | | 0
Heterogeneity over Homogeneity: Investigating Multilingual Speech Pre-Trained Models for Detecting Audio Deepfake | Code | 0
Targeted aspect-based emotion analysis to detect opportunities and precaution in financial Twitter messages | | 0
UniMEEC: Towards Unified Multimodal Emotion Recognition and Emotion Cause | | 0
Inclusive Design Insights from a Preliminary Image-Based Conversational Search Systems Evaluation | | 0
Cross-Attention is Not Always Needed: Dynamic Cross-Attention for Audio-Visual Dimensional Emotion Recognition | | 0
MMCert: Provable Defense against Adversarial Attacks to Multi-modal Models | Code | 0
Fusion approaches for emotion recognition from speech using acoustic and text-based features | | 0
Page 29 of 82

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | M2D-CLAP | EmoA | 77.4 | | Unverified
2 | M2D2 | EmoA | 76.7 | | Unverified
3 | M2D | EmoA | 76.1 | | Unverified
4 | Jukebox (Pre-training: CALM) | EmoA | 72.1 | | Unverified
5 | CLMR (Pre-training: contrastive) | EmoA | 67.8 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Logistic Regression on posteriors of xlsr-Wav2Vec2.0 & bi-LSTM+Attention | Accuracy | 86.7 | | Unverified
2 | MultiMAE-DER | WAR | 83.61 | | Unverified
3 | Intermediate-Attention-Fusion | Accuracy | 81.58 | | Unverified
4 | Logistic Regression on posteriors of the CNN-14 & biLSTM-GuidedST | Accuracy | 80.08 | | Unverified
5 | ERANN-0-4 | Accuracy | 74.8 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CAGE | Top-3 Accuracy (%) | 14.73 | | Unverified
2 | FocusCLIP | Top-3 Accuracy (%) | 13.73 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | VGG based | 5-class test accuracy | 66.13 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | MaSaC-ERC-Z | F1-score (Weighted) | 51.17 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | BiHDM | Accuracy | 40.34 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | w2v2-L-robust-12 | Concordance correlation coefficient (CCC) | 0.64 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 4D-aNN | Accuracy | 96.1 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CNN | 1'" | 1 | | Unverified