EmoCaps: Emotion Capsule based Model for Conversational Emotion Recognition
Zaijing Li1, Fengxiao Tang1*, Ming Zhao1*, Yusen Zhu2
Abstract
Emotion recognition in conversation (ERC) aims to analyze the speaker's state and identify their emotion in the conversation. Recent works in ERC focus on context modeling but ignore the representation of contextual emotional tendency. In order to effectively extract multimodal information and the emotional tendency of the utterance, we propose a new structure named Emoformer to extract multimodal emotion vectors from different modalities and fuse them with the sentence vector to form an emotion capsule. Furthermore, we design an end-to-end ERC model called EmoCaps, which extracts emotion vectors through the Emoformer structure and obtains the emotion classification results from a context analysis model. Through experiments on two benchmark datasets, our model shows better performance than the existing state-of-the-art models.
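The fusion step described above can be sketched in PyTorch. This is a minimal illustration of the stated idea only, not the authors' implementation: a Transformer-style encoder stands in for Emoformer, the pooling choice and all dimensions are assumptions, and module names are hypothetical.

```python
import torch
import torch.nn as nn

class EmoformerSketch(nn.Module):
    """Hypothetical stand-in for the Emoformer block: encode one modality's
    utterance features and pool them into a single emotion vector."""
    def __init__(self, feat_dim=128, n_heads=4):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=feat_dim, nhead=n_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=1)

    def forward(self, x):              # x: (batch, seq_len, feat_dim)
        h = self.encoder(x)
        return h.mean(dim=1)           # pooled emotion vector: (batch, feat_dim)

def emotion_capsule(text_feat, audio_feat, visual_feat, sent_vec, enc):
    # One emotion vector per modality, fused with the sentence vector
    # by concatenation (fusion operator is an assumption here).
    vecs = [enc(m) for m in (text_feat, audio_feat, visual_feat)]
    return torch.cat(vecs + [sent_vec], dim=-1)   # capsule: (batch, 4 * feat_dim)

enc = EmoformerSketch()
b, t, d = 2, 5, 128
cap = emotion_capsule(torch.randn(b, t, d), torch.randn(b, t, d),
                      torch.randn(b, t, d), torch.randn(b, d), enc)
print(cap.shape)  # torch.Size([2, 512])
```

The resulting capsule would then feed a downstream context-analysis model that produces the per-utterance emotion labels.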