
CALM: Continuous Adaptive Learning for Language Modeling

2020-04-08

Kristjan Arumae, Parminder Bhatia


Abstract

Training large language representation models has become standard practice in the natural language processing community. These models can be fine-tuned on any number of specific tasks; moreover, such large, high-capacity models can continue training on domain-specific unlabeled data to make their initialization even more robust for supervised tasks. We demonstrate that in practice these pre-trained models suffer performance deterioration in the form of catastrophic forgetting when evaluated on tasks from a general domain such as GLUE. In this work we propose CALM, Continuous Adaptive Learning for Language Modeling: techniques that produce models which retain knowledge across multiple domains. With these methods, we are able to reduce the performance gap across supervised tasks introduced by task-specific models, which we demonstrate in a continual learning setting in biomedical and clinical domains.
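
The abstract does not spell out a mechanism, but one common way to retain general-domain knowledge while continuing pretraining on domain-specific data is rehearsal: mixing general-domain batches back into the domain-adaptation updates. The sketch below illustrates that generic continual-learning technique, not necessarily CALM's exact method; `model`, `lm_loss`, `domain_loader`, and `general_loader` are hypothetical placeholders for a language model, its pretraining loss, and iterables of batches.

```python
import random
import torch

def _next_batch(it, loader):
    # Re-create the iterator when a loader is exhausted so training can
    # run for an arbitrary number of steps.
    try:
        return next(it), it
    except StopIteration:
        it = iter(loader)
        return next(it), it

def continue_pretraining(model, lm_loss, domain_loader, general_loader,
                         steps=10_000, rehearsal_prob=0.25, lr=5e-5):
    """Adapt a pre-trained LM to a new domain while rehearsing general data."""
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    domain_iter = iter(domain_loader)
    general_iter = iter(general_loader)
    model.train()
    for _ in range(steps):
        # With probability `rehearsal_prob`, draw a general-domain batch so
        # the model keeps fitting the original distribution (mitigating
        # catastrophic forgetting); otherwise take a new-domain batch.
        if random.random() < rehearsal_prob:
            batch, general_iter = _next_batch(general_iter, general_loader)
        else:
            batch, domain_iter = _next_batch(domain_iter, domain_loader)
        optimizer.zero_grad()
        loss = lm_loss(model, batch)  # e.g. masked-LM loss on the batch
        loss.backward()
        optimizer.step()
```

Under this assumption, `rehearsal_prob` trades adaptation speed against forgetting: higher values preserve more general-domain performance at the cost of slower domain adaptation.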
