Dense Passage Retrieval for Open-Domain Question Answering
Vladimir Karpukhin, Barlas Oğuz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, Wen-tau Yih
Code
- github.com/facebookresearch/DPR (official, in paper, PyTorch) ★ 1,864
- github.com/huggingface/transformers (PyTorch) ★ 158,292
- github.com/deepset-ai/haystack (PyTorch) ★ 24,592
- github.com/texttron/tevatron (JAX) ★ 734
- github.com/luyug/GC-DPR (PyTorch) ★ 136
- github.com/DevSinghSachan/unsupervised-passage-reranking (PyTorch) ★ 100
- github.com/AkariAsai/XORQA (PyTorch) ★ 80
- github.com/oriram/spider (PyTorch) ★ 54
- github.com/Hannibal046/nanoDPR (PyTorch) ★ 54
- github.com/efficientqa/retrieval-based-baselines (TensorFlow) ★ 51
Abstract
Open-domain question answering relies on efficient passage retrieval to select candidate contexts, where traditional sparse vector space models, such as TF-IDF or BM25, are the de facto method. In this work, we show that retrieval can be practically implemented using dense representations alone, where embeddings are learned from a small number of questions and passages by a simple dual-encoder framework. When evaluated on a wide range of open-domain QA datasets, our dense retriever outperforms a strong Lucene-BM25 system by 9%-19% absolute in terms of top-20 passage retrieval accuracy, and helps our end-to-end QA system establish new state-of-the-art results on multiple open-domain QA benchmarks.
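At inference time, the dual-encoder retrieval described in the abstract reduces to a maximum inner product search: the question encoder's embedding is scored against precomputed passage embeddings. A minimal sketch of that scoring step, with toy NumPy vectors standing in for the BERT encoder outputs (all embedding values below are hypothetical, not from the paper):

```python
import numpy as np

def retrieve_top_k(question_emb, passage_embs, k=2):
    """Rank passages by inner product with the question embedding,
    the similarity function used by DPR-style dual encoders."""
    scores = passage_embs @ question_emb      # one score per passage
    top = np.argsort(-scores)[:k]             # indices of the k highest scores
    return top.tolist(), scores[top].tolist()

# Toy 3-d embeddings; a real system would use 768-d BERT [CLS] vectors.
q = np.array([1.0, 0.0, 1.0])
passages = np.array([
    [1.0, 0.0, 1.0],   # closely aligned with the question
    [0.0, 1.0, 0.0],   # orthogonal (unrelated)
    [0.5, 0.0, 0.5],   # partially aligned
])
indices, scores = retrieve_top_k(q, passages, k=2)
# indices → [0, 2]: the aligned passages, ranked by inner product
```

In the full system, the passage embeddings are indexed with FAISS so this search scales to millions of passages.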
Benchmark Results
| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| Natural Questions | DPR | EM | 41.5 | — | Unverified |
| TriviaQA | DPR | EM | 56.8 | — | Unverified |
| WebQuestions | DPR | EM | 42.4 | — | Unverified |