Gradual Fine-Tuning for Low-Resource Domain Adaptation
2021-03-03 · EACL (AdaptNLP) 2021
Haoran Xu, Seth Ebner, Mahsa Yarmohammadi, Aaron Steven White, Benjamin Van Durme, Kenton Murray
Code
- github.com/fe1ixxu/Gradual-Finetune (official, in paper; PyTorch, ★ 6)
- github.com/isi-boston/ed-pooling (PyTorch, ★ 2)
Abstract
Fine-tuning is known to improve NLP models by adapting an initial model trained on more plentiful but less domain-salient examples to data in a target domain. Such domain adaptation is typically done using one stage of fine-tuning. We demonstrate that gradually fine-tuning in a multi-stage process can yield substantial further gains and can be applied without modifying the model or learning objective.
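The repositories above contain the authors' actual setup. As a rough illustration only, the toy PyTorch sketch below shows the multi-stage idea described in the abstract: each stage fine-tunes the model from the previous stage on a data mix whose out-of-domain portion shrinks, ending with in-domain data alone. The synthetic data, the halving schedule, and the helper names (`make_data`, `train_one_stage`) are assumptions for illustration, not the paper's recipe.

```python
# Toy sketch of gradual (multi-stage) fine-tuning. Assumptions: synthetic
# Gaussian data, a linear classifier, and a schedule that halves the
# out-of-domain data at each stage; the paper's experiments use real NLP
# models and domain-specific corpora.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset, ConcatDataset, Subset

torch.manual_seed(0)

def make_data(n, shift):
    # Two-class blobs; `shift` moves the distribution to mimic the gap
    # between out-of-domain and target-domain data.
    x = torch.randn(n, 16) + shift
    y = (x.sum(dim=1) > shift * 16).long()
    return TensorDataset(x, y)

in_domain = make_data(200, shift=0.5)    # small target-domain set
out_domain = make_data(2000, shift=0.0)  # large out-of-domain set

model = nn.Linear(16, 2)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def train_one_stage(dataset, epochs=2):
    # Standard fine-tuning loop; the model and optimizer carry over
    # between stages, so each stage continues from the previous one.
    loader = DataLoader(dataset, batch_size=32, shuffle=True)
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()

# Gradual fine-tuning: keep all in-domain data at every stage while the
# out-of-domain share shrinks, then finish on in-domain data only.
for ood_size in [2000, 1000, 500]:
    mix = ConcatDataset([in_domain, Subset(out_domain, range(ood_size))])
    train_one_stage(mix)
train_one_stage(in_domain)  # final stage: target domain only
```

Note that nothing in the sketch modifies the model or the loss, matching the abstract's claim that the gains come purely from the multi-stage data schedule.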