More Classifiers, Less Forgetting: A Generic Multi-classifier Paradigm for Incremental Learning

2020-08-01 · ECCV 2020 · Code Available

Yu Liu, Sarah Parisot, Gregory Slabaugh, Xu Jia, Ales Leonardis, Tinne Tuytelaars

Abstract

Overcoming catastrophic forgetting in neural networks is a long-standing, core research objective in incremental learning. Notable studies have shown that regularization strategies enable a network to retain previously acquired knowledge without severe forgetting. Since these regularization strategies mostly operate on classifier outputs, we propose a MUlti-Classifier (MUC) incremental learning paradigm that integrates an ensemble of auxiliary classifiers to estimate more effective regularization constraints. Additionally, we extend two common methods, focusing on parameter and activation regularization respectively, from the conventional single-classifier paradigm to MUC. Our classifier ensemble promotes the regularization of network parameters or activations when the model moves on to learn the next task. Under a task-agnostic evaluation setting, experimental results on the CIFAR-100 and Tiny ImageNet incremental benchmarks show that our method outperforms other baselines. Specifically, MUC obtains a 3%-5% accuracy boost and a 4%-5% reduction in forgetting ratio compared with MAS and LwF. Our code is available at https://github.com/Liuy8/MUC.
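To make the core idea concrete, here is a minimal sketch of how an LwF-style distillation constraint might be extended from a single classifier to an ensemble of heads, as the abstract describes. This is an illustrative assumption, not the authors' implementation: the function name `muc_distillation_loss`, the NumPy formulation, and the temperature value are all hypothetical; the paper's actual code (PyTorch) lives at the repository above.

```python
import numpy as np

def softmax(logits, T=2.0):
    # Temperature-softened softmax, numerically stabilized.
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def muc_distillation_loss(old_logits_per_head, new_logits_per_head, T=2.0):
    """Hypothetical MUC-style regularizer: average an LwF-style
    distillation loss over ALL classifier heads, so the constraint
    comes from the whole ensemble rather than one classifier.

    old_logits_per_head / new_logits_per_head: lists of (batch, classes)
    arrays, one per head (main classifier + auxiliary classifiers).
    """
    losses = []
    for old_z, new_z in zip(old_logits_per_head, new_logits_per_head):
        p_old = softmax(old_z, T)                    # frozen old-model targets
        log_p_new = np.log(softmax(new_z, T) + 1e-12)
        # Cross-entropy between old and new head outputs.
        losses.append(-(p_old * log_p_new).sum(axis=-1).mean())
    return float(np.mean(losses))
```

Because cross-entropy H(p, q) is minimized when q = p, this term is smallest when every head's new outputs match its old outputs, which is exactly the "remember previous tasks" pressure; averaging over auxiliary heads simply aggregates more such constraints than a single-classifier setup would.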
