SOTAVerified

Fine-tune the Entire RAG Architecture (including DPR retriever) for Question-Answering

2021-06-22 · Code Available

Shamane Siriwardhana, Rivindu Weerasekera, Elliott Wen, Suranga Nanayakkara


Abstract

In this paper, we illustrate how to fine-tune the entire Retrieval Augmented Generation (RAG) architecture in an end-to-end manner. We highlight the main engineering challenges that must be addressed to achieve this objective. We also show how the end-to-end RAG architecture outperforms the original RAG architecture on the task of question answering. We have open-sourced our implementation in the HuggingFace Transformers library.

Tasks

Benchmark Results

Dataset  Model        Metric       Claimed  Verified  Status
SQuAD    RAG-end2end  Exact Match  40.02    —         Unverified

Reproductions