Don't Stop Pretraining: Adapt Language Models to Domains and Tasks
Suchin Gururangan, Ana Marasović, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, Noah A. Smith
Code
- github.com/allenai/dont-stop-pretraining (official, in paper, PyTorch, ★ 540)
- github.com/shizhediao/black-box-prompt-learning (PyTorch, ★ 56)
- github.com/shizhediao/t-dna (PyTorch, ★ 19)
- github.com/SUSTechBruce/G-MAP (PyTorch, ★ 9)
- github.com/bionlu-coling2024/biomed-ner-intent_detection (PyTorch, ★ 8)
- github.com/sagjounkani/Dont-Stop-Pretraining-Use-Adapters-Instead (PyTorch, ★ 1)
Abstract
Language models pretrained on text from a wide variety of sources form the foundation of today's NLP. In light of the success of these broad-coverage models, we investigate whether it is still helpful to tailor a pretrained model to the domain of a target task. We present a study across four domains (biomedical and computer science publications, news, and reviews) and eight classification tasks, showing that a second phase of pretraining in-domain (domain-adaptive pretraining) leads to performance gains in both high- and low-resource settings. Moreover, adapting to the task's unlabeled data (task-adaptive pretraining) improves performance even after domain-adaptive pretraining. Finally, we show that adapting to a task corpus augmented using simple data selection strategies is an effective alternative, especially when resources for domain-adaptive pretraining might be unavailable. Overall, we consistently find that multi-phase adaptive pretraining offers large gains in task performance.
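Both adaptation phases described in the abstract amount to continuing masked-language-model pretraining on a new unlabeled corpus before fine-tuning: on a large domain corpus for domain-adaptive pretraining (DAPT), or on the task's own unlabeled text for task-adaptive pretraining (TAPT). Below is a minimal sketch of the TAPT phase using the Hugging Face `transformers` and `datasets` libraries; the corpus file name and most hyperparameters here are illustrative assumptions rather than the paper's exact configuration (the paper continues pretraining RoBERTa and runs TAPT for roughly 100 epochs on the small task corpora).

```python
# Sketch: task-adaptive pretraining (TAPT) — continue RoBERTa's masked-LM
# training on the task's unlabeled text before fine-tuning on labels.
# `task_corpus.txt` (one document per line) and the hyperparameters are
# illustrative assumptions, not the paper's exact setup.
from transformers import (
    RobertaTokenizerFast,
    RobertaForMaskedLM,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)
from datasets import load_dataset

tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base")
model = RobertaForMaskedLM.from_pretrained("roberta-base")

# Load the unlabeled task corpus and tokenize each line.
dataset = load_dataset("text", data_files={"train": "task_corpus.txt"})["train"]
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True,
    remove_columns=["text"],
)

# Dynamic masking: 15% of tokens are masked anew each time a batch is drawn.
collator = DataCollatorForLanguageModeling(tokenizer, mlm=True, mlm_probability=0.15)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="tapt-roberta",
        num_train_epochs=100,  # the paper runs TAPT for ~100 epochs on small task corpora
        per_device_train_batch_size=8,
        learning_rate=1e-4,
        save_strategy="no",
    ),
    train_dataset=dataset,
    data_collator=collator,
)
trainer.train()
trainer.save_model("tapt-roberta")
```

The saved checkpoint is then fine-tuned on the labeled task data as usual; pointing the same script at a large in-domain corpus instead of `task_corpus.txt` gives the DAPT phase, and the paper's best results chain the two (DAPT then TAPT) before fine-tuning.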