Transformer-based Language Models for Factoid Question Answering at BioASQ9b

2021-09-15 · Code Available

Urvashi Khanna, Diego Mollá


Abstract

In this work, we describe our experiments and participating systems in the BioASQ Task 9b Phase B challenge of biomedical question answering. We focused on finding the ideal answers and investigated multi-task fine-tuning and gradual unfreezing techniques for transformer-based language models. For factoid questions, our ALBERT-based systems ranked first in test batch 1 and fourth in test batch 2. Our DistilBERT systems outperformed the ALBERT variants in test batches 4 and 5 despite having 81% fewer parameters. However, we observed that gradual unfreezing had no significant impact on model accuracy compared to standard fine-tuning.
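Gradual unfreezing, as contrasted with standard fine-tuning in the abstract, starts training with the pretrained encoder frozen and unfreezes layers top-down, one stage at a time. The sketch below illustrates the schedule on a toy layer stack in PyTorch; the model, layer count, and staging are illustrative assumptions, not the authors' actual BioASQ configuration.

```python
import torch.nn as nn

def build_toy_encoder(num_layers: int = 4, dim: int = 8) -> nn.Sequential:
    """Stand-in stack of layers; a real system would use a transformer encoder."""
    return nn.Sequential(*[nn.Linear(dim, dim) for _ in range(num_layers)])

def freeze_all(model: nn.Module) -> None:
    """Freeze every parameter so no layer is updated initially."""
    for p in model.parameters():
        p.requires_grad = False

def unfreeze_top(model: nn.Sequential, n: int) -> None:
    """Unfreeze the top `n` layers (closest to the task head)."""
    for layer in list(model)[-n:]:
        for p in layer.parameters():
            p.requires_grad = True

model = build_toy_encoder()
freeze_all(model)

# One fine-tuning stage per unfreezing step: the optimizer would run
# over model parameters with requires_grad=True at each stage.
for stage in range(1, len(model) + 1):
    unfreeze_top(model, stage)
    trainable = sum(p.requires_grad for p in model.parameters())
    print(f"stage {stage}: {trainable} trainable tensors")
```

With standard fine-tuning, by contrast, all parameters are trainable from the first step; the paper reports no significant accuracy difference between the two regimes on this task.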
