Frosting Weights for Better Continual Training

2020-01-07 · Code Available

Xiaofeng Zhu, Feng Liu, Goce Trajcevski, Dingding Wang


Abstract

Training a neural network can be a lifelong and computationally intensive process. A severe adverse effect that deep neural networks may suffer is catastrophic forgetting: performance on previously learned data degrades during retraining on new data. One appealing property for avoiding such disruptions in continual learning is the additive nature of ensemble models. In this paper, we propose two generic ensemble approaches, gradient boosting and meta-learning, to solve the catastrophic forgetting problem when tuning pre-trained neural network models.
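The additive, gradient-boosting-style idea mentioned in the abstract can be illustrated with a toy sketch. This is not the paper's actual method or architecture; it is a minimal numpy example, using simple linear models as stand-ins for networks, of the general principle: keep the pre-trained base model frozen and fit a new learner to its residual errors on new data, so the old weights are never overwritten.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_linear(X, y):
    """Least-squares fit with a bias term -- a stand-in for training a network."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return w

def predict(w, X):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return Xb @ w

# "Old" task data; the base model is trained once and then frozen.
X_old = rng.normal(size=(200, 3))
y_old = X_old @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=200)
w_base = fit_linear(X_old, y_old)

# "New" task data with a shifted target function.
X_new = rng.normal(size=(200, 3))
y_new = X_new @ np.array([1.5, -2.0, 1.0]) + 0.1 * rng.normal(size=200)

# Boosting step: fit a new learner to the frozen base model's residuals.
residual = y_new - predict(w_base, X_new)
w_boost = fit_linear(X_new, residual)

def ensemble(X):
    # Additive ensemble: base prediction plus the booster's correction.
    return predict(w_base, X) + predict(w_boost, X)

# The base model's behaviour on old data is untouched (no forgetting),
# while the ensemble tracks the new task better than the base alone.
err_old = np.mean((predict(w_base, X_old) - y_old) ** 2)
err_new_base = np.mean((predict(w_base, X_new) - y_new) ** 2)
err_new_ens = np.mean((ensemble(X_new) - y_new) ** 2)
```

Because the base model's weights are frozen, its predictions on the old task are bit-for-bit unchanged after the new task is learned; only the additive correction adapts, which is the property that makes ensembles attractive for continual training.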
