
Sequence to Sequence Coreference Resolution

2020-12-01 · COLING (CRAC) 2020 · Code Available

Gorka Urbizu, Ander Soraluze, Olatz Arregi

Abstract

Until recently, coreference resolution was a critical component in the pipeline of any NLP task involving deep language understanding, such as machine translation, chatbots, summarization, or sentiment analysis. Nowadays, however, those end tasks are learned end-to-end by deep neural networks without adding any explicit knowledge about coreference. Thus, coreference resolution is used less often in the training of other NLP tasks or of trending pretrained language models. In this paper we present a new approach that frames coreference resolution as a sequence-to-sequence task based on the Transformer architecture. This approach is simple and universal: it is compatible with any language or dataset (regardless of singletons) and easier to integrate with current language model architectures. We test it on the ARRAU corpus, where we obtain a CoNLL F1 score of 65.6. We see this approach not as a final goal, but as a means to pretrain sequence-to-sequence language models (T5) on coreference resolution.
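The abstract frames coreference resolution as a sequence-to-sequence task: the source is plain text and the target is the same text annotated with coreference information. As a minimal sketch of what such a target sequence could look like, the function below wraps each mention span in bracket tokens carrying a cluster id. This bracket scheme and the `linearize` helper are illustrative assumptions, not the exact annotation format used in the paper.

```python
# Hypothetical sketch: build a seq2seq target for coreference by
# wrapping each mention in "(<cluster_id> ... )" markers. The actual
# tagging scheme of Urbizu et al. may differ; this only illustrates
# the general idea of linearizing coreference clusters into text.

def linearize(tokens, clusters):
    """Return a target string where every mention span is wrapped in
    cluster markers. `clusters` maps a cluster id to a list of
    (start, end) token spans, with `end` inclusive."""
    opens = {}   # token index -> cluster ids whose mention opens here
    closes = {}  # token index -> cluster ids whose mention closes here
    for cid, spans in clusters.items():
        for start, end in spans:
            opens.setdefault(start, []).append(cid)
            closes.setdefault(end, []).append(cid)

    out = []
    for i, tok in enumerate(tokens):
        for cid in opens.get(i, []):
            out.append(f"({cid}")
        out.append(tok)
        for _ in closes.get(i, []):
            out.append(")")
    return " ".join(out)

source = "John said he would come".split()
# "John" and "he" corefer, so both belong to cluster 0
target = linearize(source, {0: [(0, 0), (2, 2)]})
print(target)  # → (0 John ) said (0 he ) would come
```

A model such as T5 could then be trained to map the plain source sequence to this annotated target sequence, which is what makes the formulation compatible with standard sequence-to-sequence architectures.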
