SOTAVerified

Emotion Recognition in Conversation

Given the transcript of a conversation along with speaker information for each constituent utterance, the ERC task aims to identify the emotion of each utterance from a set of pre-defined emotions. Formally, given the input sequence of N utterances [(u1, p1), (u2, p2), . . . , (uN, pN)], where each utterance ui = [ui,1, ui,2, . . . , ui,T] consists of T words ui,j and is spoken by party pi, the task is to predict the emotion label ei of each utterance ui.
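The input/output shape of the task can be sketched in a few lines of Python. This is a toy keyword-lookup baseline, not any published ERC model: the `EMOTIONS` inventory and `keywords` table are illustrative assumptions (real datasets such as MELD or IEMOCAP define their own label sets), and a real model would condition on the full dialogue context and speaker identities rather than on single words.

```python
from typing import List, Tuple

# Hypothetical pre-defined emotion inventory; each ERC dataset fixes its own.
EMOTIONS = ["neutral", "joy", "sadness", "anger", "surprise", "fear", "disgust"]

def erc_predict(conversation: List[Tuple[str, str]]) -> List[str]:
    """Assign an emotion label e_i to every utterance u_i in a conversation.

    `conversation` is the sequence [(u1, p1), ..., (uN, pN)] of
    (utterance, speaker) pairs from the task definition. This toy
    baseline ignores context and speakers and uses a keyword lookup.
    """
    keywords = {"great": "joy", "sorry": "sadness", "hate": "anger"}
    labels = []
    for utterance, _speaker in conversation:
        label = "neutral"  # default when no keyword matches
        for word in utterance.lower().split():
            word = word.strip(".,!?")  # drop trailing punctuation
            if word in keywords:
                label = keywords[word]
                break
        labels.append(label)
    return labels

dialogue = [
    ("This is great news!", "A"),
    ("Sorry, I can't make it.", "B"),
]
print(erc_predict(dialogue))  # one label per utterance
```

The interface (a sequence of utterance/speaker pairs in, one label per utterance out) is the part that matters; the papers listed below replace the lookup with graph, transformer, or state-space models over the whole conversation.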

Papers

Showing 61-70 of 141 papers

Title | Status | Hype
Dynamic Graph Neural ODE Network for Multi-modal Emotion Recognition in Conversation | - | 0
CMATH: Cross-Modality Augmented Transformer with Hierarchical Variational Distillation for Multimodal Emotion Recognition in Conversation | - | 0
NUS-Emo at SemEval-2024 Task 3: Instruction-Tuning LLM for Multimodal Emotion-Cause Analysis in Conversations | - | 0
Masked Graph Learning with Recurrent Alignment for Multimodal Emotion Recognition in Conversation | - | 0
MasonTigers at SemEval-2024 Task 10: Emotion Discovery and Flip Reasoning in Conversation with Ensemble of Transformers and Prompting | - | 0
Efficient Long-distance Latent Relation-aware Graph Neural Network for Multi-modal Emotion Recognition in Conversations | - | 0
FeedForward at SemEval-2024 Task 10: Trigger and sentext-height enriched emotion analysis in multi-party conversations | Code | 0
Enhancing Emotion Recognition in Conversation through Emotional Cross-Modal Fusion and Inter-class Contrastive Learning | - | 0
Revisiting Multimodal Emotion Recognition in Conversation from the Perspective of Graph Spectrum | - | 0
Revisiting Multi-modal Emotion Learning with Broad State Space Models and Probability-guidance Fusion | - | 0

No leaderboard results yet.