SOTAVerified

Emotion Recognition

Emotion recognition is an important research area for enabling effective human-computer interaction. Human emotions can be detected from speech signals, facial expressions, body language, and electroencephalography (EEG). Source: Using Deep Autoencoders for Facial Expression Recognition

Papers

Showing 801-850 of 2041 papers

Title | Status | Hype
Designing and Evaluating Speech Emotion Recognition Systems: A reality check case study with IEMOCAP | - | 0
An experimental study in Real-time Facial Emotion Recognition on new 3RL dataset | - | 0
Transformer-based Self-supervised Multimodal Representation Learning for Wearable Emotion Recognition | - | 0
Metrics for Dataset Demographic Bias: A Case Study on Facial Expression Recognition | Code | 0
EEGMatch: Learning with Incomplete Labels for Semi-Supervised EEG-based Cross-Subject Emotion Recognition | Code | 1
Depression detection in social media posts using affective and social norm features | - | 0
Decoupled Multimodal Distilling for Emotion Recognition | Code | 1
A novel facial emotion recognition model using segmentation VGG-19 architecture | Code | 1
CNN-n-GRU: end-to-end speech emotion recognition from raw waveform signal using CNNs and gated recurrent unit networks | - | 0
Efficient Neural Architecture Search for Emotion Recognition | - | 0
Context De-confounded Emotion Recognition | Code | 1
EmotionIC: emotional inertia and contagion-driven dependency modeling for emotion recognition in conversation | Code | 0
How People Respond to the COVID-19 Pandemic on Twitter: A Comparative Analysis of Emotional Expressions from US and India | - | 0
Memotion 3: Dataset on Sentiment and Emotion Analysis of Codemixed Hindi-English Memes | Code | 0
Tollywood Emotions: Annotation of Valence-Arousal in Telugu Song Lyrics | - | 0
EmotiEffNet Facial Features in Uni-task Emotion Recognition in Video at ABAW-5 competition | - | 0
Reevaluating Data Partitioning for Emotion Detection in EmoWOZ | - | 0
Leveraging TCN and Transformer for effective visual-audio fusion in continuous emotion recognition | Code | 0
Exploring Large-scale Unlabeled Faces to Enhance Facial Expression Recognition | - | 0
Improving EEG-based Emotion Recognition by Fusing Time-frequency And Spatial Representations | - | 0
CoordViT: A Novel Method of Improve Vision Transformer-Based Speech Emotion Recognition using Coordinate Information Concatenate | - | 0
A Deep-Learning-Based Neural Decoding Framework for Emotional Brain-Computer Interfaces | - | 0
Pre-trained Model Representations and their Robustness against Noise for Speech Emotion Analysis | - | 0
DWFormer: Dynamic Window transFormer for Speech Emotion Recognition | Code | 1
SpeechFormer++: A Hierarchical Efficient Framework for Paralinguistic Speech Processing | Code | 1
Using Auxiliary Tasks In Multimodal Fusion Of Wav2vec 2.0 And BERT For Multimodal Emotion Recognition | - | 0
A low latency attention module for streaming self-supervised speech representation learning | Code | 0
Multi-Modality in Music: Predicting Emotion in Music from High-Level Audio Features and Lyrics | Code | 0
Partial Label Learning for Emotion Recognition from EEG | Code | 1
Ensemble knowledge distillation of self-supervised speech models | - | 0
ChatGPT: Jack of all trades, master of none | Code | 1
Knowledge-aware Bayesian Co-attention for Multimodal Emotion Recognition | - | 0
Medical Face Masks and Emotion Recognition from the Body: Insights from a Deep Learning Perspective | Code | 0
Gaussian-smoothed Imbalance Data Improves Speech Emotion Recognition | - | 0
Deep Implicit Distribution Alignment Networks for Cross-Corpus Speech Emotion Recognition | - | 0
Graph-Enhanced Emotion Neural Decoding | Code | 0
NUAA-QMUL-AIIT at Memotion 3: Multi-modal Fusion with Squeeze-and-Excitation for Internet Meme Emotion Analysis | Code | 0
A Large-Scale Analysis of Persian Tweets Regarding Covid-19 Vaccination | - | 0
Cluster-Level Contrastive Learning for Emotion Recognition in Conversations | Code | 1
Audio Representation Learning by Distilling Video as Privileged Information | - | 0
cross-modal fusion techniques for utterance-level emotion recognition from text and speech | - | 0
deep learning of segment-level feature representation for speech emotion recognition in conversations | - | 0
CSAT-FTCN: A Fuzzy-Oriented Model with Contextual Self-attention Network for Multimodal Emotion Recognition | - | 0
Facial Expression Recognition using Squeeze and Excitation-powered Swin Transformers | - | 0
BERT-ERC: Fine-tuning BERT is Enough for Emotion Recognition in Conversation | - | 0
Modulation spectral features for speech emotion recognition using deep neural networks | - | 0
LiteLSTM Architecture Based on Weights Sharing for Recurrent Neural Networks | - | 0
Emotion Recognition from Microblog Managing Emoticon with Text and Classifying using 1D CNN | - | 0
Seamless Multimodal Biometrics for Continuous Personalised Wellbeing Monitoring | - | 0
A Novel Exploitative and Explorative GWO-SVM Algorithm for Smart Emotion Recognition | - | 0
Page 17 of 41

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | M2D-CLAP | EmoA | 77.4 | - | Unverified
2 | M2D2 | EmoA | 76.7 | - | Unverified
3 | M2D | EmoA | 76.1 | - | Unverified
4 | Jukebox (Pre-training: CALM) | EmoA | 72.1 | - | Unverified
5 | CLMR (Pre-training: contrastive) | EmoA | 67.8 | - | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | Logistic Regression on posteriors of xlsr-Wav2Vec2.0&bi-LSTM+Attention | Accuracy | 86.7 | - | Unverified
2 | MultiMAE-DER | WAR | 83.61 | - | Unverified
3 | Intermediate-Attention-Fusion | Accuracy | 81.58 | - | Unverified
4 | Logistic Regression on posteriors of the CNN-14&biLSTM-GuidedST | Accuracy | 80.08 | - | Unverified
5 | ERANN-0-4 | Accuracy | 74.8 | - | Unverified
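The table above mixes two accuracy conventions common in speech emotion recognition: plain accuracy and WAR (weighted average recall, which reduces to overall accuracy because per-class recalls are weighted by class frequency). A minimal sketch contrasting WAR with its class-balanced counterpart UAR (function name and label encoding are illustrative choices, not from this leaderboard):

```python
def war_uar(y_true, y_pred):
    """Compute WAR (weighted average recall == accuracy) and
    UAR (unweighted average recall, mean of per-class recalls)."""
    classes = sorted(set(y_true))
    # WAR: fraction of all samples predicted correctly.
    war = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    # UAR: recall of each class counts equally, regardless of class size.
    recalls = []
    for c in classes:
        idx = [i for i, t in enumerate(y_true) if t == c]
        recalls.append(sum(y_pred[i] == c for i in idx) / len(idx))
    uar = sum(recalls) / len(recalls)
    return war, uar


war, uar = war_uar(["ang", "ang", "ang", "hap"],
                   ["ang", "ang", "hap", "hap"])
```

On class-imbalanced corpora such as IEMOCAP, WAR can look much better than UAR when a model favors the majority class, which is why papers often report both.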
# | Model | Metric | Claimed | Verified | Status
1 | CAGE | Top-3 Accuracy (%) | 14.73 | - | Unverified
2 | FocusCLIP | Top-3 Accuracy (%) | 13.73 | - | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | VGG based | 5-class test accuracy | 66.13 | - | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | MaSaC-ERC-Z | F1-score (Weighted) | 51.17 | - | Unverified
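Weighted F1, the metric in the table above, averages per-class F1 scores weighted by each class's support, so frequent emotion classes dominate the score. A minimal stdlib-only sketch (the function name is my own; libraries like scikit-learn provide the same via `f1_score(..., average="weighted")`):

```python
from collections import Counter


def weighted_f1(y_true, y_pred):
    """Per-class F1, averaged with weights proportional to class support."""
    support = Counter(y_true)
    total = len(y_true)
    score = 0.0
    for c in sorted(set(y_true)):
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        score += support[c] / total * f1  # weight by class frequency
    return score
```

Note that classes absent from the ground truth contribute no weight, so spurious predictions of unseen labels only hurt through false positives on the seen classes.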
# | Model | Metric | Claimed | Verified | Status
1 | BiHDM | Accuracy | 40.34 | - | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | w2v2-L-robust-12 | Concordance correlation coefficient (CCC) | 0.64 | - | Unverified
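The concordance correlation coefficient used above is the standard metric for continuous (valence/arousal) emotion prediction: unlike Pearson correlation, it also penalizes shifts in mean and scale between predictions and labels. A minimal stdlib-only sketch of Lin's CCC (function name and population-variance convention are my own choices):

```python
def ccc(y_true, y_pred):
    """Lin's concordance correlation coefficient:
    2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2)."""
    n = len(y_true)
    mx = sum(y_true) / n
    my = sum(y_pred) / n
    vx = sum((x - mx) ** 2 for x in y_true) / n
    vy = sum((y - my) ** 2 for y in y_pred) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(y_true, y_pred)) / n
    return 2 * cov / (vx + vy + (mx - my) ** 2)
```

CCC reaches 1 only for perfect agreement; a model whose predictions correlate well but are systematically offset scores lower, which is why it is preferred over Pearson's r for dimensional emotion benchmarks.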
# | Model | Metric | Claimed | Verified | Status
1 | 4D-aNN | Accuracy | 96.1 | - | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | CNN | 1'" | 1 | - | Unverified