SOTAVerified

What GPT Knows About Who is Who

2022-05-16 · Insights (ACL) 2022 · Code Available

Xiaohan Yang, Eduardo Peynetti, Vasco Meerman, Chris Tanner


Abstract

Coreference resolution -- a crucial task for understanding discourse and language at large -- has yet to see widespread benefits from large language models (LLMs). Moreover, coreference resolution systems largely rely on supervised labels, which are expensive and difficult to annotate, making the task a natural candidate for prompt engineering, which requires no labeled training data. In this paper, we introduce a QA-based prompt-engineering method and assess the abilities and limitations of generative, pre-trained LLMs on the task of coreference resolution. Our experiments show that GPT-2 and GPT-Neo can return valid answers, but that their ability to identify coreferent mentions is limited and prompt-sensitive, leading to inconsistent results.
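As a rough illustration of the QA-based prompting idea, the sketch below queries GPT-2 through the Hugging Face transformers library with a question about a pronoun's referent. The prompt template, example passage, and decoding settings are assumptions for illustration, not the paper's exact setup.

# Minimal sketch of QA-style coreference prompting with GPT-2.
# Assumes the Hugging Face "transformers" library; the template and
# passage are illustrative, not the authors' exact prompt format.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

passage = "Anna thanked Maria because she had helped with the move."
prompt = (
    f"Passage: {passage}\n"
    "Question: In the passage, who does 'she' refer to?\n"
    "Answer:"
)

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=5,        # the answer is a short mention span
    do_sample=False,         # greedy decoding for reproducibility
    pad_token_id=tokenizer.eos_token_id,
)
# Keep only the newly generated tokens, i.e. the model's answer.
answer = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(answer.strip())

Swapping "gpt2" for a GPT-Neo checkpoint (e.g. "EleutherAI/gpt-neo-1.3B" via AutoTokenizer and AutoModelForCausalLM) exercises the other model family the paper evaluates; the paper's finding is that answers like this one are valid in form but often wrong and highly sensitive to the prompt wording.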
