SOTAVerified

Understanding Catastrophic Forgetting and Remembering in Continual Learning with Optimal Relevance Mapping

2021-02-22 · Code Available

Prakhar Kaushik, Alex Gain, Adam Kortylewski, Alan Yuille


Abstract

Catastrophic forgetting in neural networks is a significant problem for continual learning. A majority of current methods replay previous data during training, which violates the constraints of an ideal continual learning system. Additionally, current approaches that deal with forgetting ignore the problem of catastrophic remembering, i.e., the worsening ability to discriminate between data from different tasks. In our work, we introduce Relevance Mapping Networks (RMNs), which are inspired by the Optimal Overlap Hypothesis. The mappings reflect the relevance of the weights for the task at hand by assigning large weights to essential parameters. We show that RMNs learn an optimized representational overlap that overcomes the twin problems of catastrophic forgetting and remembering. Our approach achieves state-of-the-art performance across all common continual learning datasets, significantly outperforming even data-replay methods while not violating the constraints of an ideal continual learning system. Moreover, RMNs retain the ability to detect data from new tasks in an unsupervised manner, demonstrating their resilience against catastrophic remembering.
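The core idea in the abstract — a per-task relevance mapping that gates which shared weights each task uses — can be illustrated with a minimal sketch. This is not the authors' implementation; it assumes (hypothetically) that the mapping is an element-wise mask in [0, 1] stored per task and multiplied into a shared weight matrix at forward time, so tasks can overlap on useful weights while ignoring irrelevant ones.

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared weight matrix for a single layer.
W = rng.normal(size=(4, 3))

# Hypothetical per-task relevance mappings in [0, 1]: one mask per task,
# gating which shared weights each task actually uses. In the paper these
# would be learned; here they are random for illustration.
relevance = {
    "task_a": rng.uniform(size=W.shape),
    "task_b": rng.uniform(size=W.shape),
}

def forward(x, task):
    # Effective weights for a task: element-wise product of the shared
    # weights and that task's relevance mapping.
    W_task = W * relevance[task]
    return x @ W_task

x = np.ones(4)
out_a = forward(x, "task_a")  # task-specific output from shared weights
out_b = forward(x, "task_b")
```

Because each task reads the shared weights through its own mask, updating weights that one task's mask zeroes out need not disturb that task — which is the intuition behind avoiding forgetting without data replay.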

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| CIFAR-100 (10 tasks) | RMN (ResNet) | Average Accuracy | 84.9 | — | Unverified |
| CIFAR-100 (20 tasks) | RMN | Average Accuracy | 81.0 | — | Unverified |
| ImageNet-50 (5 tasks) | RMN | Accuracy | 68.1 | — | Unverified |
| Permuted MNIST | RMN | Average Accuracy | 97.99 | — | Unverified |

Reproductions