
Why Attention? Analyzing and Remedying BiLSTM Deficiency in Modeling Cross-Context for NER

2019-10-07

Peng-Hsuan Li, Tsu-Jui Fu, Wei-Yun Ma


Abstract

State-of-the-art approaches to NER have used sequence-labeling BiLSTM as a core module. This paper formally shows the limitation of BiLSTM in modeling cross-context patterns. Two types of simple cross-structures -- self-attention and Cross-BiLSTM -- are shown to effectively remedy the problem. On both OntoNotes 5.0 and WNUT 2017, clear and consistent improvements are achieved over bare-bone models, up to 8.7% on some of the multi-token mentions. In-depth analyses of the improvements are further given across several aspects, especially the identification of multi-token mentions.
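As a rough illustration of one of the cross-structures named in the abstract, a single self-attention layer over BiLSTM-style token representations can be sketched in NumPy. The shapes, weight names, and scaled-dot-product form here are illustrative assumptions, not the paper's actual architecture:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(H, Wq, Wk, Wv):
    # H: (seq_len, d) token representations, e.g. BiLSTM outputs.
    # Each output position is a weighted mix over ALL positions,
    # giving every token direct access to cross-context information.
    Q, K, V = H @ Wq, H @ Wk, H @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])  # (seq_len, seq_len)
    A = softmax(scores, axis=-1)             # rows sum to 1
    return A @ V                             # (seq_len, d)

rng = np.random.default_rng(0)
d = 8
H = rng.standard_normal((5, d))              # 5 tokens, d-dim each
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))
out = self_attention(H, Wq, Wk, Wv)
print(out.shape)  # (5, 8)
```

The point of placing such a layer on top of (or in place of parts of) a sequence-labeling BiLSTM is that attention weights connect arbitrary position pairs directly, rather than propagating information step by step through recurrent states.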
