On Generalization in Coreference Resolution

2021-09-20 · CRAC (ACL) 2021 · Code Available

Shubham Toshniwal, Patrick Xia, Sam Wiseman, Karen Livescu, Kevin Gimpel

Abstract

While coreference resolution is defined independently of dataset domain, most models for performing coreference resolution do not transfer well to unseen domains. We consolidate a set of 8 coreference resolution datasets targeting different domains to evaluate the off-the-shelf performance of models. We then mix three datasets for training; even though their domains, annotation guidelines, and metadata differ, we propose a method for jointly training a single model on this heterogeneous data mixture by using data augmentation to account for annotation differences and sampling to balance the data quantities. We find that in a zero-shot setting, models trained on a single dataset transfer poorly, while joint training yields improved overall performance, leading to better generalization in coreference resolution models. This work contributes a new benchmark for robust coreference resolution and multiple new state-of-the-art results.
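The abstract's balancing step (sampling so that large corpora such as PreCo do not swamp small ones such as LitBank) can be illustrated with a minimal sketch. This is an assumption-laden illustration, not the paper's implementation: it assumes a simple uniform-over-datasets sampling scheme, and the corpus names and document counts are placeholders.

```python
import random

def balanced_batches(datasets, num_steps, seed=0):
    """Sample one document per training step, first choosing a dataset
    uniformly at random, then a document within it. This equalizes how
    often each corpus is seen, regardless of its size.
    (Hypothetical sketch; the paper's exact sampling scheme may differ.)"""
    rng = random.Random(seed)
    names = list(datasets)
    schedule = []
    for _ in range(num_steps):
        name = rng.choice(names)          # uniform over datasets, not documents
        doc = rng.choice(datasets[name])  # then uniform within the chosen dataset
        schedule.append((name, doc))
    return schedule

# Illustrative corpora with very different sizes (counts are placeholders).
corpora = {
    "OntoNotes": [f"on_doc_{i}" for i in range(2802)],
    "PreCo": [f"preco_doc_{i}" for i in range(36120)],
    "LitBank": [f"lit_doc_{i}" for i in range(80)],
}
schedule = balanced_batches(corpora, num_steps=3000)
```

Under this scheme each corpus contributes roughly a third of the training steps, whereas sampling documents uniformly from the pooled data would make PreCo dominate.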

Benchmark Results

Dataset | Model | Metric | Claimed | Verified | Status
LitBank | longdoc S (OntoNotes + PreCo + LitBank) | F1 | 78.2 | n/a | Unverified
OntoNotes | longdoc S (ON + PreCo + LitBank + 30k pseudo-singletons) | F1 | 79.6 | n/a | Unverified
OntoNotes | longdoc S (OntoNotes + PreCo + LitBank) | F1 | 79.2 | n/a | Unverified
OntoNotes | longdoc S (OntoNotes + 60k pseudo-singletons) | F1 | 80.6 | n/a | Unverified
PreCo | longdoc S (OntoNotes + PreCo + LitBank) | F1 | 87.6 | n/a | Unverified
Quizbowl | longdoc S (OntoNotes + PreCo + LitBank) | F1 | 42.9 | n/a | Unverified
WikiCoref | longdoc S (ON + PreCo + LitBank + 30k pseudo-singletons) | F1 | 62.5 | n/a | Unverified
WikiCoref | longdoc S (OntoNotes + PreCo + LitBank) | F1 | 60.3 | n/a | Unverified
Winograd Schema Challenge | longdoc S (ON + PreCo + LitBank + 30k pseudo-singletons) | Accuracy | 59.4 | n/a | Unverified
Winograd Schema Challenge | longdoc S (OntoNotes + PreCo + LitBank) | Accuracy | 60.1 | n/a | Unverified
