
Embracing Ambiguity: Shifting the Training Target of NLI Models

2021-06-06 · ACL 2021

Johannes Mario Meissner, Napat Thumwanit, Saku Sugawara, Akiko Aizawa


Abstract

Natural Language Inference (NLI) datasets contain examples with highly ambiguous labels. While much prior work pays little attention to this fact, several recent efforts acknowledge and embrace the existence of ambiguity, such as UNLI and ChaosNLI. In this paper, we explore training directly on the estimated label distribution of the annotators in the NLI task, using a learning loss based on this ambiguity distribution instead of the gold labels. We prepare AmbiNLI, a trial dataset obtained from readily available sources, and show it is possible to reduce ChaosNLI divergence scores when finetuning on this data, a promising first step towards learning how to capture linguistic ambiguity. Additionally, we show that training on the same amount of data but targeting the ambiguity distribution instead of gold labels can result in models that achieve higher performance and learn better representations for downstream tasks.
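The shift in training target the abstract describes can be sketched as replacing one-hot gold labels with the annotators' label distribution in a cross-entropy loss. This is only an illustrative sketch in NumPy (the paper's exact loss and data pipeline may differ; the example distribution and logits below are hypothetical):

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def soft_label_loss(logits, label_dist, eps=1e-12):
    """Cross-entropy against a soft annotator label distribution.

    Minimizing this is equivalent, up to the constant entropy of
    label_dist, to minimizing KL(label_dist || model distribution).
    """
    probs = softmax(logits)
    return float(-np.sum(label_dist * np.log(probs + eps), axis=-1).mean())

# Hypothetical 3-way NLI label order: entailment / neutral / contradiction.
# Suppose annotators split 60/30/10 on an ambiguous premise-hypothesis pair.
label_dist = np.array([[0.6, 0.3, 0.1]])

logits_calibrated   = np.array([[2.0, 1.3, 0.2]])    # roughly mirrors the split
logits_overconfident = np.array([[5.0, -2.0, -2.0]]) # collapses onto one label

# A model matching the annotator distribution incurs a lower loss than an
# overconfident model that would be rewarded under one-hot gold labels.
loss_calibrated = soft_label_loss(logits_calibrated, label_dist)
loss_overconfident = soft_label_loss(logits_overconfident, label_dist)
```

With a one-hot target the overconfident model would score better; with the ambiguity distribution as target, the calibrated model does, which is the incentive change the paper exploits.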
