
QUASER: Question Answering with Scalable Extractive Rationalization

2021-05-16 · ACL ARR May 2021

Anonymous


Abstract

Designing NLP models that produce predictions by first extracting a set of relevant input sentences (i.e., rationales) is gaining importance as a means of improving model interpretability and providing supporting evidence for users. Current unsupervised approaches are trained to extract rationales that maximize prediction accuracy, which is invariably achieved by exploiting spurious correlations in datasets and leads to unconvincing rationales. In this paper, we introduce unsupervised generative models that extract dual-purpose rationales, which must not only support a subsequent answer prediction but also support a reproduction of the input query. We show that such models produce more meaningful rationales that are less influenced by dataset artifacts and, as a result, also achieve state-of-the-art results on rationale extraction metrics on four datasets from the ERASER benchmark, significantly improving upon previous unsupervised methods.
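The dual-purpose idea in the abstract can be sketched in miniature: select a rationale from the input, then train it against two objectives at once, one for the answer and one for reconstructing the query. This is a hypothetical illustration, not the paper's generative model; the function names, the top-k selection heuristic, and the weighting parameter `lam` are all assumptions introduced for clarity.

```python
def extract_rationale(sentence_scores, k):
    """Select the k highest-scoring input sentences as the rationale,
    returning their indices in document order (a stand-in for the
    paper's learned extraction step)."""
    ranked = sorted(range(len(sentence_scores)),
                    key=lambda i: sentence_scores[i], reverse=True)
    return sorted(ranked[:k])

def dual_purpose_loss(answer_nll, query_nll, lam=0.5):
    """Dual-purpose objective: the extracted rationale must support
    both the answer prediction (answer_nll) and a reproduction of the
    input query (query_nll); lam trades off the two terms."""
    return (1.0 - lam) * answer_nll + lam * query_nll

# Toy usage: four scored sentences, keep the best two as the rationale.
rationale = extract_rationale([0.1, 0.9, 0.3, 0.7], k=2)
loss = dual_purpose_loss(answer_nll=2.0, query_nll=4.0, lam=0.5)
```

Tying the rationale to query reconstruction is what discourages the selector from latching onto spurious answer-predictive artifacts: a sentence that helps predict the answer but says nothing about the question contributes little to the second term.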
