
Conditional Input Gated Low-Rank Perturbations for Continual Learning

2020-11-23 · Approximate Inference (AABI) Symposium 2021

Anonymous


Abstract

We address the problem of learning convolutional neural networks (CNNs) in the continual setting, where tasks arrive sequentially and only the data of the current task is available. In this setting, CNNs are prone to drastic performance degradation on all previous tasks. We extend the idea of Abati et al. of data-conditional expansion of the CNN architecture. We propose to use low-rank, and hence weaker, additive perturbations of the CNN filters, which suffice due to the compositional structure of CNN layers. Such low-rank adaptation modules reduce computational cost and promote sparsity when adapting the CNN to new tasks. We validate our approach empirically on the split MNIST and split CIFAR-10 tasks.
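The core idea of the abstract can be sketched numerically: instead of storing a full per-task copy of a layer's filters, store two thin factors whose product is a rank-r additive perturbation, optionally scaled by a data-conditional gate. This is a minimal illustration, not the authors' implementation; the shapes, the gate variable `g`, and the rank are assumptions chosen for the example.

```python
import numpy as np

# Hedged sketch (not the paper's code): adapt a conv filter bank with a
# low-rank additive perturbation. A conv layer's weights have shape
# (out_ch, in_ch, k, k); flattened to (out_ch, in_ch*k*k), a rank-r update
# is W + U @ V with U: (out_ch, r) and V: (r, in_ch*k*k). A hypothetical
# data-conditional gate g in [0, 1] scales the perturbation per task/input.

rng = np.random.default_rng(0)
out_ch, in_ch, k, r = 64, 32, 3, 4

W = rng.standard_normal((out_ch, in_ch * k * k))   # frozen base filters
U = rng.standard_normal((out_ch, r)) * 0.01        # task-specific factor
V = rng.standard_normal((r, in_ch * k * k)) * 0.01  # task-specific factor
g = 1.0                                            # conditional gate (open)

W_task = W + g * (U @ V)                           # adapted filters

full_params = W.size              # a full per-task copy of the filters
lowrank_params = U.size + V.size  # parameters of the low-rank adaptation
print(full_params, lowrank_params)  # 18432 vs 1408
```

With these example shapes the per-task storage drops from 18,432 parameters to 1,408, which is the computational-cost argument the abstract makes; when the gate is closed (`g = 0.0`), the base filters are recovered exactly, leaving old tasks untouched.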
