
Entropy-based Stability-Plasticity for Lifelong Learning

2022-04-18 · Code Available

Vladimir Araujo, Julio Hurtado, Alvaro Soto, Marie-Francine Moens


Abstract

The ability to learn continuously remains elusive for deep learning models. Unlike humans, models cannot accumulate knowledge in their weights when learning new tasks, mainly due to an excess of plasticity and a low incentive to reuse weights when training on a new task. To address the stability-plasticity dilemma in neural networks, we propose a novel method called Entropy-based Stability-Plasticity (ESP). Our approach dynamically decides how much each model layer should be modified via a plasticity factor. We incorporate branch layers and an entropy-based criterion into the model to find such a factor. Our experiments in the domains of natural language and vision show the effectiveness of our approach in leveraging prior knowledge by reducing interference. Moreover, in some cases it is possible to freeze layers during training, leading to a speed-up in training.
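The abstract does not give the exact formulation, but the mechanism it describes can be sketched as follows: a branch classifier attached to a layer produces a prediction, the entropy of that prediction is normalized to [0, 1], and the result is used as a plasticity factor that scales the layer's gradient update (a factor near zero effectively freezes the layer). This is a minimal illustrative sketch, not the paper's implementation; the function names (`plasticity_factor`, `scaled_update`) and the exact normalization are assumptions.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of raw scores."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def plasticity_factor(branch_logits):
    """Hypothetical ESP-style factor: the normalized entropy of a branch
    classifier's prediction. A confident branch (low entropy) suggests the
    layer already encodes useful knowledge and should stay stable (~0);
    an uncertain branch (high entropy) leaves the layer plastic (~1)."""
    probs = softmax(branch_logits)
    entropy = -sum(p * math.log(p) for p in probs if p > 0.0)
    max_entropy = math.log(len(probs))  # entropy of the uniform distribution
    return entropy / max_entropy  # in [0, 1]

def scaled_update(weight, grad, lr, factor):
    """Gradient step modulated by the plasticity factor; factor ~ 0
    effectively freezes the parameter, reducing interference."""
    return weight - lr * factor * grad
```

For example, a branch that is maximally uncertain (uniform logits) yields a factor of 1.0, while a highly confident branch yields a factor near 0, so the corresponding layer is barely updated; freezing layers whose factor stays near zero is what enables the training speed-up mentioned above.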
