When to Fold'em: How to answer Unanswerable questions
2021-05-01
Marshall Ho, Zhipeng Zhou, Judith He
- Code: github.com/allenai/document-qa (official, TensorFlow, ★ 437)
Abstract
We present three question-answering models trained on the SQuAD 2.0 dataset (BiDAF, DocumentQA, and ALBERT Retro-Reader), demonstrating the improvement of language models over the past three years. Through our research on fine-tuning pre-trained models for question answering, we developed a novel approach that achieves a two-percentage-point improvement in SQuAD 2.0 F1 while reducing training time. Our method of re-initializing select layers of a parameter-shared language model is simple yet empirically powerful.
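The layer re-initialization idea can be illustrated with a toy sketch. The abstract does not specify how layers are selected or re-initialized, so everything below is an assumption: we model a network as a list of per-layer weight matrices and freshly initialize the top-k layers (with a small-variance normal init, a common choice for transformers) before fine-tuning, leaving the lower pre-trained layers intact. The function names `reinit_top_layers` and `init_weight` are hypothetical, not from the paper.

```python
import numpy as np

def init_weight(shape, rng):
    # Fresh small-variance normal init (std 0.02 is a common
    # transformer default; an assumption, not the paper's choice)
    return rng.normal(0.0, 0.02, size=shape)

def reinit_top_layers(layers, k, seed=0):
    """Return a copy of `layers` with the top-k layers freshly initialized.

    `layers` is a list of weight matrices ordered bottom-to-top;
    the bottom len(layers) - k pre-trained layers are kept as-is.
    """
    rng = np.random.default_rng(seed)
    new_layers = [w.copy() for w in layers]
    for i in range(len(layers) - k, len(layers)):
        new_layers[i] = init_weight(layers[i].shape, rng)
    return new_layers

# Toy "pre-trained" model: 6 layers, each a 4x4 weight matrix
pretrained = [np.full((4, 4), float(i)) for i in range(6)]
reinitialized = reinit_top_layers(pretrained, k=2)
```

Note that for a parameter-shared model like ALBERT, all layers reuse one set of weights, so re-initializing "select layers" is less direct than this sketch suggests; the sketch only conveys the general re-initialization-before-fine-tuning idea.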