SOTAVerified

Emotion Recognition in Conversation

Given the transcript of a conversation along with speaker information for each constituent utterance, the ERC task aims to identify the emotion of each utterance from a set of pre-defined emotion labels. Formally, given an input sequence of N utterances [(u1, p1), (u2, p2), . . . , (uN, pN)], where each utterance ui = [ui,1, ui,2, . . . , ui,T] consists of T words ui,j and is spoken by party pi, the task is to predict the emotion label ei of each utterance ui.
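The input/output structure of the task definition above can be sketched in Python. Note that the `Utterance` type, the label set, and the keyword-matching baseline below are illustrative assumptions for demonstration only, not part of any specific benchmark or method:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Utterance:
    words: List[str]   # u_i = [u_{i,1}, ..., u_{i,T}]
    speaker: str       # party p_i

# Example label set (assumed; real benchmarks define their own emotion inventories)
EMOTIONS = ["neutral", "joy", "sadness", "anger", "surprise", "fear", "disgust"]

def predict_emotions(conversation: List[Utterance]) -> List[str]:
    """Toy keyword baseline: assign one emotion label e_i to every utterance u_i.

    A real ERC model would condition on conversational context and speaker
    identity; this stub only illustrates the expected input/output shapes.
    """
    keywords = {"happy": "joy", "sad": "sadness", "angry": "anger"}
    labels = []
    for utt in conversation:
        label = "neutral"
        for w in utt.words:
            label = keywords.get(w.lower(), label)
        labels.append(label)
    return labels

conv = [
    Utterance(["I", "am", "so", "happy", "today"], speaker="A"),
    Utterance(["Why", "are", "you", "sad", "?"], speaker="B"),
]
print(predict_emotions(conv))  # one label per utterance: ['joy', 'sadness']
```

The key point is the interface: the model receives the full sequence of (utterance, speaker) pairs and must emit exactly one label per utterance.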

Papers

Showing 21–30 of 141 papers

| Title | Status | Hype |
| --- | --- | --- |
| Context-Aware Siamese Networks for Efficient Emotion Recognition in Conversation | — | 0 |
| Multi-Task Multi-Modal Self-Supervised Learning for Facial Expression Recognition | Code | 1 |
| UniMEEC: Towards Unified Multimodal Emotion Recognition and Emotion Cause | — | 0 |
| Emotion-Anchored Contrastive Learning Framework for Emotion Recognition in Conversation | Code | 1 |
| CKERC: Joint Large Language Models with Commonsense Knowledge for Emotion Recognition in Conversation | — | 0 |
| SemEval 2024 -- Task 10: Emotion Discovery and Reasoning its Flip in Conversation (EDiReF) | Code | 0 |
| Curriculum Learning Meets Directed Acyclic Graph for Multimodal Emotion Recognition | Code | 1 |
| TelME: Teacher-leading Multimodal Fusion Network for Emotion Recognition in Conversation | Code | 1 |
| Joyful: Joint Modality Fusion and Graph Contrastive Learning for Multimodal Emotion Recognition | Code | 1 |
| Qwen-Audio: Advancing Universal Audio Understanding via Unified Large-Scale Audio-Language Models | Code | 3 |
Page 3 of 15

No leaderboard results yet.