SOTAVerified

Neural Architectures for Nested NER through Linearization

2019-08-19 · ACL 2019 · Code Available

Jana Straková, Milan Straka, Jan Hajič


Abstract

We propose two neural network architectures for nested named entity recognition (NER), a setting in which named entities may overlap and also be labeled with more than one label. We encode the nested labels using a linearized scheme. In our first proposed approach, the nested labels are modeled as multilabels corresponding to the Cartesian product of the nested labels in a standard LSTM-CRF architecture. In the second one, the nested NER is viewed as a sequence-to-sequence problem, in which the input sequence consists of the tokens and the output sequence of the labels, using hard attention on the word whose label is being predicted. The proposed methods outperform the nested NER state of the art on four corpora: ACE-2004, ACE-2005, GENIA and Czech CNEC. We also enrich our architectures with the recently published contextual embeddings: ELMo, BERT and Flair, reaching further improvements for the four nested entity corpora. In addition, we report flat NER state-of-the-art results for CoNLL-2002 Dutch and Spanish and for CoNLL-2003 English.
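To make the linearized multilabel encoding concrete, here is a minimal sketch (not the authors' code; the tag scheme, separator character, and function names are illustrative assumptions) of how nested entity spans can be flattened into one composite label per token, so that the label combinations form the Cartesian-product tag set a standard LSTM-CRF can then predict:

```python
# Hedged sketch of nested-label linearization: each token receives the
# concatenation of its BILOU tags across all entities covering it
# (outermost first), producing a single "multilabel" per token.

def bilou(start, end, pos):
    """BILOU tag for position `pos` inside the span [start, end]."""
    if start == end:
        return "U"  # unit-length entity
    if pos == start:
        return "B"  # beginning of a multi-token entity
    if pos == end:
        return "L"  # last token of a multi-token entity
    return "I"      # inside

def linearize(tokens, entities):
    """entities: list of (start, end, label) spans, outermost first."""
    multilabels = []
    for i in range(len(tokens)):
        tags = [f"{bilou(s, e, i)}-{lab}"
                for (s, e, lab) in entities if s <= i <= e]
        # Tokens outside every entity get the plain "O" tag.
        multilabels.append("|".join(tags) if tags else "O")
    return multilabels

tokens = ["in", "the", "Bank", "of", "China"]
# "China" (GPE) is nested inside "Bank of China" (ORG).
entities = [(2, 4, "ORG"), (4, 4, "GPE")]
print(linearize(tokens, entities))
# ['O', 'O', 'B-ORG', 'I-ORG', 'L-ORG|U-GPE']
```

Each distinct composite tag such as `L-ORG|U-GPE` is treated as one atomic label, which is what makes the resulting tag set the Cartesian product of the nested label inventories.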

Benchmark Results

| Dataset               | Model                    | Metric | Claimed | Verified | Status     |
|-----------------------|--------------------------|--------|---------|----------|------------|
| ACE 2004              | seq2seq+BERT+Flair       | F1     | 84.40   | —        | Unverified |
| ACE 2005              | seq2seq+BERT+Flair       | F1     | 84.33   | —        | Unverified |
| CoNLL 2002 (Dutch)    | Straková et al., 2019    | F1     | 92.70   | —        | Unverified |
| CoNLL 2002 (Spanish)  | Straková et al., 2019    | F1     | 88.80   | —        | Unverified |
| CoNLL 2003 (English)  | LSTM-CRF+ELMo+BERT+Flair | F1     | 93.38   | —        | Unverified |
| CoNLL 2003 (German)   | Straková et al., 2019    | F1     | 85.10   | —        | Unverified |
| GENIA                 | seq2seq+BERT+Flair       | F1     | 78.31   | —        | Unverified |

Reproductions