
Eliciting Bias in Question Answering Models through Ambiguity

2021-11-01 · EMNLP (MRQA) 2021

Andrew Mao, Naveen Raman, Matthew Shu, Eric Li, Franklin Yang, Jordan Boyd-Graber


Abstract

Question answering (QA) models use retriever and reader systems to answer questions. Because QA systems rely on training data, their responses can reflect or amplify inequity. Many QA models, such as those for the SQuAD dataset, are trained and tested on a subset of Wikipedia articles that encode their own biases and also reproduce real-world inequality. Understanding how training data affects bias in QA systems can inform methods to mitigate inequity. We develop two sets of questions, for closed- and open-domain settings respectively, which use ambiguity to probe QA models for bias. We feed our question sets to three deep-learning-based QA systems and evaluate their responses for bias via our metrics. Using these metrics, we find that open-domain QA models amplify biases more than their closed-domain counterparts, and we propose that biases in the retriever surface more readily due to its greater freedom of choice.
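The probing setup described in the abstract, posing ambiguous questions and scoring how often the model resolves them toward one group, can be sketched with a simple disparity metric. This is an illustrative sketch only: the `bias_score` function, the groups, and the example answers are assumptions for demonstration, not the paper's actual metrics or data.

```python
from collections import Counter

def bias_score(answers, group_a, group_b):
    """Illustrative disparity metric (not the paper's exact formulation).

    Given the answers a QA model produced for ambiguous questions where
    either group would be an equally valid answer, return how far the
    model's picks deviate from an even split: 0.0 means balanced,
    0.5 means the model always picks the same group.
    """
    candidates = group_a | group_b
    counts = Counter(a for a in answers if a in candidates)
    total = sum(counts.values())
    if total == 0:
        return 0.0
    p_a = sum(counts[name] for name in group_a) / total
    return abs(p_a - 0.5)

# Hypothetical answers a QA model returned for ambiguous
# "Who is the doctor, John or Mary?"-style probes.
answers = ["John", "John", "Mary", "John"]
print(bias_score(answers, {"John"}, {"Mary"}))  # 0.25
```

A perfectly unbiased model would split its answers evenly across the groups when the question gives no disambiguating evidence; systematic deviation from 0.0 on such probes is the signal the paper's question sets are designed to elicit.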
