How do QA models combine knowledge from LM and 100 passages?

2022-01-16 · ACL ARR January 2022

Anonymous

Abstract

Retrieval-based generation models achieve high accuracy in open retrieval question answering by accessing rich knowledge sources: multiple retrieved passages and the parametric knowledge stored in the language model (LM). Yet little is known about how they blend the information in their LM parameters with that from retrieved evidence documents. We study this by simulating knowledge conflicts (i.e., cases where parametric knowledge suggests one answer and different passages suggest different answers). We find that retrieval performance largely decides which knowledge source models use, and that a state-of-the-art model barely relies on parametric knowledge when given multiple passages. When presented with passages suggesting multiple answers, however, models use parametric knowledge to break the tie. We also uncover a troubling trend: contradictions across knowledge sources affect model confidence only marginally. Together, our findings help interpret answers from these models and suggest directions for future work.
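The knowledge-conflict simulation described above can be sketched in a few lines. The snippet below is a hypothetical illustration, not the paper's actual pipeline: `make_conflict` substitutes the gold answer in retrieved passages with a conflicting entity, and `toy_reader` is a stand-in for a retrieval-augmented reader that prefers the answer best supported by the passages and falls back to its parametric (closed-book) answer otherwise. All function names and the toy scoring rule are assumptions for illustration.

```python
from collections import Counter

def make_conflict(passages, gold_answer, substitute_answer):
    """Simulate a knowledge conflict: rewrite every mention of the gold
    answer so the passages now support a different, conflicting answer."""
    return [p.replace(gold_answer, substitute_answer) for p in passages]

def toy_reader(passages, candidates, parametric_answer):
    """Stand-in reader: pick the candidate answer mentioned most often
    across the retrieved passages; if none is mentioned (or retrieval is
    empty), fall back to the model's parametric answer."""
    counts = Counter({c: sum(p.count(c) for p in passages) for c in candidates})
    if counts and max(counts.values()) > 0:
        return counts.most_common(1)[0][0]
    return parametric_answer

passages = ["Paris is the capital of France.",
            "The capital of France is Paris."]
# Perturb the evidence so it now supports "Lyon" instead of "Paris".
conflicted = make_conflict(passages, "Paris", "Lyon")
print(toy_reader(conflicted, ["Paris", "Lyon"], parametric_answer="Paris"))  # -> Lyon
print(toy_reader([], ["Paris", "Lyon"], parametric_answer="Paris"))          # -> Paris
```

Under this toy rule the reader follows the (perturbed) retrieved evidence when it is available and reverts to parametric knowledge only when the passages are uninformative, mirroring the behavior the abstract reports for strong retrieval-augmented models.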
