
Fine-Tuning and Retrieval Augmented Generation for Question Answering Using Affordable Large Language Models

2024-05-01 · LREC-COLING 2024 · Code Available

Tiberiu Boroş, Radu Chivereanu, Stefan Dumitrescu, Octavian Purcaru


Abstract

We present Sherlock, our system submitted to the UNLP 2024 Shared Task on Question Answering, where it won first place. We employ a mix of methods: supervised fine-tuning and direct preference optimization of instruction-tuned models on automatically translated datasets, model weight merging, and retrieval-augmented generation. We present and motivate our chosen sequence of steps, along with an ablation study that quantifies the effect of each additional step. The resulting model and code are publicly available (download links are provided in the paper).
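One of the techniques the abstract mentions, model weight merging, is often realized as a linear interpolation of the parameters of two fine-tuned checkpoints. The sketch below illustrates that idea on plain Python dicts standing in for tensor state dicts; the function name `merge_weights` and the `alpha` parameter are illustrative assumptions, not the paper's actual implementation.

```python
def merge_weights(sd_a, sd_b, alpha=0.5):
    """Linearly interpolate two state dicts: alpha * A + (1 - alpha) * B.

    Hypothetical helper for illustration; real merges operate on
    framework tensors (e.g. per-parameter torch tensors) instead of floats.
    """
    assert sd_a.keys() == sd_b.keys(), "models must share an architecture"
    return {k: alpha * sd_a[k] + (1 - alpha) * sd_b[k] for k in sd_a}


# Toy example with scalar "weights" standing in for full tensors.
model_a = {"layer.weight": 1.0, "layer.bias": 0.0}
model_b = {"layer.weight": 3.0, "layer.bias": 2.0}
merged = merge_weights(model_a, model_b, alpha=0.5)
# merged == {"layer.weight": 2.0, "layer.bias": 1.0}
```

In practice, `alpha` trades off how much each parent model contributes; equal weighting (0.5) is a common starting point when merging checkpoints fine-tuned on different objectives.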
