
Accumulating Word Representations in Multi-level Context Integration for ERC Task

2023-11-06 · International Conference on Knowledge and Systems Engineering (KSE) 2023 · Code Available

Jieying Xue, Phuong Minh Nguyen, Matheny Blake, Nguyen Minh Le


Abstract

Emotion Recognition in Conversations (ERC), the task of predicting the emotion label for each utterance given the conversation as context, has attracted increasing interest recently because of its broad applicability. To identify the emotion of a focal utterance, it is crucial to model its meaning fused with contextual information. Many recent studies have focused on capturing different types of context as supporting information and integrating them in various ways: local and global contexts, or speaker-level context through intra-speaker and inter-speaker integration. However, the importance of word representations after context integration has not been fully investigated, even though word-level information is also essential for reflecting a speaker's emotions in conversation. In this work, we therefore investigate the impact of accumulating word vector representations on sentence modeling fused with multi-level contextual integration. To this end, we propose an effective method for sentence modeling in ERC and achieve competitive state-of-the-art results across four widely recognized benchmark datasets: IEMOCAP, MELD, EmoryNLP, and DailyDialog. Our source code can be accessed via the following link: github.com/yingjie7/per_erc/tree/AccumWR.
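The abstract's core idea, accumulating word-level representations and fusing them with contextual information to form an utterance representation, could be sketched very loosely as follows. This is an illustrative simplification under assumed design choices (dot-product attention pooling and concatenation fusion), not the paper's actual architecture; all function names are hypothetical:

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def accumulate_words(word_vecs, context_vec):
    """Accumulate word vectors of one utterance, guided by a context vector.

    word_vecs:   (num_words, dim) word embeddings of the focal utterance
    context_vec: (dim,) contextual representation (e.g., from surrounding
                 utterances at local/global or speaker level)
    Returns a fused utterance representation of shape (2 * dim,).
    """
    scores = word_vecs @ context_vec      # relevance of each word to context
    weights = softmax(scores)             # attention distribution over words
    accumulated = weights @ word_vecs     # context-aware weighted sum of words
    # fuse accumulated word information with the contextual representation
    return np.concatenate([accumulated, context_vec])

# toy example: an utterance of 4 words with 8-dimensional embeddings
rng = np.random.default_rng(0)
words = rng.normal(size=(4, 8))
context = rng.normal(size=8)
rep = accumulate_words(words, context)
print(rep.shape)  # (16,)
```

In this sketch the context vector both selects which words matter (via attention) and contributes directly to the final representation, mirroring the abstract's point that word information should survive context integration rather than being washed out by it.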
