End-to-end Neural Coreference Resolution

2017-07-21 · EMNLP 2017 · Code Available

Kenton Lee, Luheng He, Mike Lewis, Luke Zettlemoyer

Code Available — Be the first to reproduce this paper.

Abstract

We introduce the first end-to-end coreference resolution model and show that it significantly outperforms all previous work without using a syntactic parser or hand-engineered mention detector. The key idea is to directly consider all spans in a document as potential mentions and learn distributions over possible antecedents for each. The model computes span embeddings that combine context-dependent boundary representations with a head-finding attention mechanism. It is trained to maximize the marginal likelihood of gold antecedent spans from coreference clusters and is factored to enable aggressive pruning of potential mentions. Experiments demonstrate state-of-the-art performance, with a gain of 1.5 F1 on the OntoNotes benchmark and 3.1 F1 using a 5-model ensemble, despite the fact that this is the first approach to be successfully trained with no external resources.
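The abstract packs three ideas into a few sentences: span embeddings built from boundary states plus a head-finding attention, a factored score that allows pruning, and a marginal-likelihood objective over gold antecedents. The sketch below illustrates those three pieces under stated assumptions: it uses PyTorch, and all module names, dimensions, and the plain linear scorers are illustrative choices, not the authors' released TensorFlow implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SpanRankingSketch(nn.Module):
    """Minimal sketch of the factored span-ranking model the abstract
    describes; names and dimensions are assumptions for illustration."""

    def __init__(self, hidden_dim: int):
        super().__init__()
        # Head-finding attention: one score per token inside a span.
        self.head_attn = nn.Linear(hidden_dim, 1)
        # Span embedding g_i = [x_start; x_end; x_head].
        span_dim = 3 * hidden_dim
        # Factored scorers: the cheap unary mention score s_m lets the
        # model prune most candidate spans before the quadratic
        # pairwise antecedent score s_a is ever computed.
        self.mention_score = nn.Linear(span_dim, 1)
        self.antecedent_score = nn.Linear(2 * span_dim, 1)

    def span_embedding(self, x: torch.Tensor, start: int, end: int) -> torch.Tensor:
        """x: (seq_len, hidden_dim) context-dependent token states,
        e.g. from a BiLSTM. Combines the two boundary representations
        with an attention-weighted "soft head word"."""
        tokens = x[start:end + 1]                      # (width, hidden_dim)
        alpha = F.softmax(self.head_attn(tokens), dim=0)
        head = (alpha * tokens).sum(dim=0)             # expected head embedding
        return torch.cat([x[start], x[end], head])

    def score_pair(self, g_i: torch.Tensor, g_j: torch.Tensor) -> torch.Tensor:
        """Factored coreference score s(i, j) = s_m(i) + s_m(j) + s_a(i, j)."""
        return (self.mention_score(g_i)
                + self.mention_score(g_j)
                + self.antecedent_score(torch.cat([g_i, g_j])))


def marginal_log_likelihood(scores: torch.Tensor, gold_mask: torch.Tensor) -> torch.Tensor:
    """Training objective: log-marginal probability of the gold antecedents.
    `scores` holds s(i, j) for every candidate antecedent j plus a dummy
    "no antecedent" entry fixed at 0; `gold_mask` marks the antecedents in
    span i's gold cluster (or the dummy if there are none). Because any
    gold antecedent is a correct prediction, the loss marginalizes over
    all of them rather than picking a single target."""
    log_norm = torch.logsumexp(scores, dim=0)
    gold = torch.logsumexp(scores.masked_fill(~gold_mask, float("-inf")), dim=0)
    return gold - log_norm
```

In the paper the scorers are feed-forward networks over richer inputs (span width, distance, speaker, and genre features), and only the top spans by mention score survive to pairwise scoring; the linear layers here are stand-ins that show the factorization, not the full architecture.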

Tasks

Coreference Resolution

Benchmark Results

Dataset      Model                  Metric   Claimed   Verified   Status
CoNLL-2012   e2e-coref + ELMo       Avg F1   70.4      –          Unverified
CoNLL-2012   e2e-coref (ensemble)   Avg F1   68.8      –          Unverified
CoNLL-2012   e2e-coref (single)     Avg F1   67.2      –          Unverified
OntoNotes    e2e-coref              F1       67.2      –          Unverified

Reproductions