SOTAVerified

Emotion Recognition

Emotion recognition is an important research area for enabling effective human-computer interaction. Human emotions can be detected from speech signals, facial expressions, body language, and electroencephalography (EEG). Source: Using Deep Autoencoders for Facial Expression Recognition

Papers

Showing 251–300 of 2041 papers

Title | Status | Hype
The MuSe 2021 Multimodal Sentiment Analysis Challenge: Sentiment, Emotion, Physiological-Emotion, and Stress | Code | 1
Emotion Recognition from Speech Using Wav2vec 2.0 Embeddings | Code | 1
TSception: Capturing Temporal Dynamics and Spatial Asymmetry from EEG for Emotion Recognition | Code | 1
Pre-training strategies and datasets for facial representation learning | Code | 1
Multimodal End-to-End Sparse Model for Emotion Recognition | Code | 1
EmoNet: A Transfer Learning Framework for Multi-Corpus Speech Emotion Recognition | Code | 1
MorphSet: Augmenting categorical emotion datasets with dimensional affect labels using face morphing | Code | 1
Pre-trained Deep Convolution Neural Network Model With Attention for Speech Emotion Recognition | Code | 1
LSSED: a large-scale dataset and benchmark for speech emotion recognition | Code | 1
SpanEmo: Casting Multi-label Emotion Classification as Span-prediction | Code | 1
Transformer-based approach towards music emotion recognition from lyrics | Code | 1
A Hierarchical Transformer with Speaker Modeling for Emotion Recognition in Conversation | Code | 1
DialogXL: All-in-One XLNet for Multi-Party Conversation Emotion Recognition | Code | 1
MSAF: Multimodal Split Attention Fusion | Code | 1
Relation-aware Graph Attention Networks with Relational Position Encodings for Emotion Recognition in Conversations | Code | 1
Emotion Understanding in Videos Through Body, Context, and Visual-Semantic Embedding Loss | Code | 1
Seen and Unseen emotional style transfer for voice conversion with a new emotional speech dataset | Code | 1
Speech SimCLR: Combining Contrastive and Reconstruction Objective for Self-supervised Speech Representation Learning | Code | 1
Neural Architecture Search of SPD Manifold Networks | Code | 1
Multimodal Emotion Recognition with Transformer-Based Self Supervised Feature Fusion | Code | 1
MTAG: Modal-Temporal Attention Graph for Unaligned Human Multimodal Language Sequences | Code | 1
Facial Emotion Recognition with Noisy Multi-task Annotations | Code | 1
Modulated Fusion using Transformer for Linguistic-Acoustic Emotion Recognition | Code | 1
Modality-Transferable Emotion Embeddings for Low-Resource Multimodal Emotion Recognition | Code | 1
Sentimental LIAR: Extended Corpus and Deep Learning Models for Fake Claim Classification | Code | 1
An Efficient Multimodal Framework for Large Scale Emotion Recognition by Fusing Music and Electrodermal Activity Signals | Code | 1
Spatio-Temporal EEG Representation Learning on Riemannian Manifold and Euclidean Space | Code | 1
Jointly Fine-Tuning "BERT-like" Self Supervised Models to Improve Multimodal Speech Emotion Recognition | Code | 1
SemEval-2020 Task 8: Memotion Analysis -- The Visuo-Lingual Metaphor! | Code | 1
Speech Driven Talking Face Generation from a Single Image and an Emotion Condition | Code | 1
Dynamic Emotion Modeling with Learnable Graphs and Graph Inception Network | Code | 1
Compact Graph Architecture for Speech Emotion Recognition | Code | 1
ECPE-2D: Emotion-Cause Pair Extraction based on Joint Two-Dimensional Representation, Interaction and Prediction | Code | 1
A Transformer-based joint-encoding for Emotion Recognition and Sentiment Analysis | Code | 1
Emotion Recognition on large video dataset based on Convolutional Feature Extractor and Recurrent Neural Network | Code | 1
Emotion Recognition in Audio and Video Using Deep Neural Networks | Code | 1
Real-time Facial Expression Recognition "In The Wild" by Disentangling 3D Expression from Identity | Code | 1
Learning and Evaluating Emotion Lexicons for 91 Languages | Code | 1
K-EmoCon, a multimodal sensor dataset for continuous emotion recognition in naturalistic conversations | Code | 1
ADVISER: A Toolkit for Developing Multi-modal, Multi-domain and Socially-engaged Conversational Agents | Code | 1
MuSe 2020 -- The First International Multimodal Sentiment Analysis in Real-life Media Challenge and Workshop | Code | 1
Multimodal Routing: Improving Local and Global Interpretability of Multimodal Language Analysis | Code | 1
Deep Multilayer Perceptrons for Dimensional Speech Emotion Recognition | Code | 1
Context Based Emotion Recognition using EMOTIC Dataset | Code | 1
Evaluation of Error and Correlation-Based Loss Functions For Multitask Learning Dimensional Speech Emotion Recognition | Code | 1
PO-EMO: Conceptualization, Annotation, and Modeling of Aesthetic Emotions in German and English Poetry | Code | 1
Multi-Time-Scale Convolution for Emotion Recognition from Speech Audio Signals | Code | 1
ProxEmo: Gait-based Emotion Learning and Multi-view Proxemic Fusion for Socially-Aware Robot Navigation | Code | 1
Multilogue-Net: A Context Aware RNN for Multi-modal Emotion Detection and Sentiment Analysis in Conversation | Code | 1
Speech emotion recognition with deep convolutional neural networks | Code | 1
Page 6 of 41

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | M2D-CLAP | EmoA | 77.4 | — | Unverified
2 | M2D2 | EmoA | 76.7 | — | Unverified
3 | M2D | EmoA | 76.1 | — | Unverified
4 | Jukebox (Pre-training: CALM) | EmoA | 72.1 | — | Unverified
5 | CLMR (Pre-training: contrastive) | EmoA | 67.8 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Logistic Regression on posteriors of xlsr-Wav2Vec2.0 & bi-LSTM+Attention | Accuracy | 86.7 | — | Unverified
2 | MultiMAE-DER | WAR | 83.61 | — | Unverified
3 | Intermediate-Attention-Fusion | Accuracy | 81.58 | — | Unverified
4 | Logistic Regression on posteriors of the CNN-14 & biLSTM-GuidedST | Accuracy | 80.08 | — | Unverified
5 | ERANN-0-4 | Accuracy | 74.8 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CAGE | Top-3 Accuracy (%) | 14.73 | — | Unverified
2 | FocusCLIP | Top-3 Accuracy (%) | 13.73 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | VGG based | 5-class test accuracy | 66.13 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | MaSaC-ERC-Z | F1-score (Weighted) | 51.17 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | BiHDM | Accuracy | 40.34 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | w2v2-L-robust-12 | Concordance correlation coefficient (CCC) | 0.64 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 4D-aNN | Accuracy | 96.1 | — | Unverified
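Two of the metrics reported in the benchmark tables above, the concordance correlation coefficient (CCC) and weighted F1, are simple to compute from predictions. A minimal sketch using their standard definitions, on synthetic toy inputs (the example arrays are illustrative and not tied to any benchmark above):

```python
import numpy as np

def ccc(x, y):
    """Concordance correlation coefficient between two 1-D arrays."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()            # population variances
    cov = ((x - mx) * (y - my)).mean()   # population covariance
    return 2.0 * cov / (vx + vy + (mx - my) ** 2)

def weighted_f1(y_true, y_pred):
    """Per-class F1, averaged with each class weighted by its support."""
    n = len(y_true)
    score = 0.0
    for c in set(y_true):
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        score += (sum(t == c for t in y_true) / n) * f1
    return score

# Perfect agreement gives CCC = 1; bias or scale mismatch lowers it.
print(ccc([0.1, 0.4, 0.7], [0.1, 0.4, 0.7]))  # 1.0
print(weighted_f1(["ang", "ang", "hap"], ["ang", "hap", "hap"]))
```

Unlike Pearson correlation, CCC penalizes any constant offset or scale difference between predictions and labels, which is why it is the usual choice for dimensional (valence/arousal) emotion prediction.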