PatchBERT: Just-in-Time, Out-of-Vocabulary Patching
2020-11-01 · EMNLP 2020
Sangwhan Moon, Naoaki Okazaki
Abstract
Large-scale pre-trained language models have shown groundbreaking performance improvements for transfer learning in natural language processing. In this paper, we study a pre-trained multilingual BERT model and analyze the out-of-vocabulary (OOV) rate on downstream tasks, how it introduces information loss, and how, as a side effect, it obstructs the potential of the underlying model. We then propose multiple approaches for mitigation and demonstrate that, when combined with fine-tuning, they improve performance at the same parameter count.
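The OOV rate the abstract refers to can be illustrated with a minimal sketch: count the fraction of tokens that fall outside a fixed vocabulary and would therefore map to an unknown-token symbol such as `[UNK]` in a BERT-style tokenizer. The vocabulary and token list below are toy placeholders, not data from the paper.

```python
def oov_rate(tokens, vocab):
    """Fraction of tokens outside the vocabulary.

    In a BERT-style tokenizer these tokens would collapse to [UNK],
    which is the information loss discussed in the abstract.
    """
    if not tokens:
        return 0.0
    unknown = sum(1 for t in tokens if t not in vocab)
    return unknown / len(tokens)


# Toy vocabulary and tokenized sentence (placeholders for illustration).
vocab = {"the", "cat", "sat", "on", "mat"}
tokens = ["the", "cat", "sat", "on", "the", "zymurgy"]

print(oov_rate(tokens, vocab))  # 1 of 6 tokens is OOV
```

In practice the same count would be taken over a downstream task's corpus using the model's actual subword vocabulary; every token that collapses to `[UNK]` is information the fine-tuned model never sees.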