SOTA Verified

Encoding and Fusing Semantic Connection and Linguistic Evidence for Implicit Discourse Relation Recognition

2022-05-01 · Findings (ACL) 2022 · Code Available

Wei Xiang, Bang Wang, Lu Dai, Yijun Mo


Abstract

Prior studies use a single attention mechanism to improve contextual semantic representation learning for implicit discourse relation recognition (IDRR). However, diverse relation senses may benefit from different attention mechanisms. We also argue that the linguistic relations between two words can be further exploited for IDRR. This paper proposes a Multi-Attentive Neural Fusion (MANF) model to encode and fuse both semantic connection and linguistic evidence for IDRR. In MANF, we design a Dual Attention Network (DAN) to learn and fuse two kinds of attentive representations of the arguments as their semantic connection. We also propose an Offset Matrix Network (OMN) to encode the linguistic relations of word pairs as linguistic evidence. Our MANF model achieves state-of-the-art results on the PDTB 3.0 corpus.
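The dual-attention idea from the abstract can be illustrated with a toy sketch: two different attention mechanisms (here, scaled dot-product and additive attention as generic stand-ins, not the paper's exact formulations) each produce an attentive view of argument 1 conditioned on argument 2, and the two views are fused by concatenation plus a linear projection. All names, shapes, and weights below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8                                  # hidden size (illustrative)
arg1 = rng.standard_normal((4, d))     # token states of argument 1
arg2 = rng.standard_normal((6, d))     # token states of argument 2

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def dot_product_attention(q, k, v):
    """First attentive view: scaled dot-product attention."""
    scores = q @ k.T / np.sqrt(q.shape[-1])          # (4, 6)
    return softmax(scores) @ v                       # (4, d)

def additive_attention(q, k, v, Wq, Wk, u):
    """Second attentive view: additive (Bahdanau-style) attention."""
    h = np.tanh((q @ Wq)[:, None, :] + (k @ Wk)[None, :, :])  # (4, 6, d)
    scores = h @ u                                   # (4, 6)
    return softmax(scores) @ v                       # (4, d)

# Hypothetical parameters for the additive branch and the fusion layer.
Wq = rng.standard_normal((d, d))
Wk = rng.standard_normal((d, d))
u = rng.standard_normal(d)
Wf = rng.standard_normal((2 * d, d))

view1 = dot_product_attention(arg1, arg2, arg2)
view2 = additive_attention(arg1, arg2, arg2, Wq, Wk, u)

# Fuse the two attentive views into one semantic-connection representation.
fused = np.concatenate([view1, view2], axis=-1) @ Wf  # (4, d)
```

The fusion step here is the simplest possible choice (concatenate and project); the paper's DAN may fuse its two representations differently, but the overall flow — two attention mechanisms, one fused output — is the same.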
