CODE-CL: Conceptor-Based Gradient Projection for Deep Continual Learning

2024-11-21 · Code Available

Marco Paul E. Apolinario, Sakshi Choudhary, Kaushik Roy


Abstract

Continual learning (CL), the ability to progressively acquire and integrate new concepts, is essential for intelligent systems adapting to dynamic environments. However, deep neural networks struggle with catastrophic forgetting (CF) when learning tasks sequentially, as training on new tasks often overwrites previously learned knowledge. To address this, recent approaches constrain updates to orthogonal subspaces using gradient projection, effectively preserving important gradient directions for previous tasks. While effective in reducing forgetting, these approaches inadvertently hinder forward knowledge transfer (FWT), particularly when tasks are highly correlated. In this work, we propose Conceptor-based gradient projection for Deep Continual Learning (CODE-CL), a novel method that leverages conceptor matrix representations, a form of regularized reconstruction, to adaptively handle highly correlated tasks. CODE-CL mitigates CF by projecting gradients onto pseudo-orthogonal subspaces of previous-task feature spaces while simultaneously promoting FWT. It achieves this by learning a linear combination of shared basis directions, allowing an efficient balance between stability and plasticity and enabling knowledge transfer across overlapping input feature representations. Extensive experiments on continual learning benchmarks validate CODE-CL's efficacy, demonstrating superior performance, reduced forgetting, and improved FWT compared to state-of-the-art methods.
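To make the core idea concrete, below is a minimal NumPy sketch of conceptor-based gradient projection in the general sense described by the abstract: a conceptor C = R(R + alpha^-2 I)^-1 is computed from the correlation matrix R of a layer's input features on previous tasks, and new-task gradients are projected onto the (soft) complement of the captured subspace. This is an illustration of the technique, not the authors' implementation; the aperture value `alpha` and all shapes are illustrative assumptions.

```python
import numpy as np

def conceptor(X, alpha=4.0):
    """Conceptor C = R (R + alpha^-2 I)^-1 for features X.

    X: (n_samples, d) layer-input features collected on previous tasks.
    alpha: aperture parameter; larger alpha makes C capture more of the
    feature subspace (C approaches a hard projector as alpha grows).
    """
    d = X.shape[1]
    R = X.T @ X / X.shape[0]  # feature correlation matrix, (d, d)
    return R @ np.linalg.inv(R + alpha ** -2 * np.eye(d))

def project_gradient(grad, C):
    """Softly project a weight gradient away from the subspace captured
    by C, damping updates that would overwrite previous-task features.

    grad: (out_dim, d) gradient of a linear layer W with y = W x.
    """
    d = C.shape[0]
    return grad @ (np.eye(d) - C)

# Toy example: previous-task features concentrated along coordinate 0.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3)) * np.array([5.0, 0.1, 0.1])
C = conceptor(X, alpha=4.0)

g = np.ones((1, 3))
g_proj = project_gradient(g, C)
# The gradient component along the dominant previous-task direction
# is strongly damped, while the others pass through mostly unchanged.
assert abs(g_proj[0, 0]) < abs(g_proj[0, 1])
```

Because C is a soft (regularized) projector rather than a hard orthogonal one, directions only weakly used by previous tasks remain partially trainable, which is how this family of methods trades stability against plasticity.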

Benchmark Results

Dataset          Model     Metric             Claimed  Verified  Status
5-Datasets       CODE-CL   Average Accuracy   93.32    —         Unverified
miniImageNet     CODE-CL   Average Accuracy   68.83    —         Unverified
Permuted MNIST   CODE-CL   Average Accuracy   96.56    —         Unverified
Split CIFAR-100  CODE-CL   Average Accuracy   77.21    —         Unverified

Reproductions