
TAG: Task-based Accumulated Gradients for Lifelong learning

2021-05-11 · Code Available

Pranshu Malviya, Balaraman Ravindran, Sarath Chandar


Abstract

When an agent encounters a continual stream of new tasks in the lifelong learning setting, it leverages the knowledge gained from earlier tasks to learn the new tasks better. In such a scenario, identifying an efficient knowledge representation becomes a challenging problem. Most existing works propose to store a subset of examples from past tasks in a replay buffer, dedicate a separate set of parameters to each task, or penalize excessive parameter updates with a regularization term. While these methods employ a general task-agnostic stochastic gradient descent update rule, we propose a task-aware optimizer that adapts the learning rate based on the relatedness among tasks. We capture the directions taken by the parameters during the updates by accumulating the gradients specific to each task. These task-based accumulated gradients act as a knowledge base that is maintained and updated throughout the stream. We empirically show that our proposed adaptive learning rate not only mitigates catastrophic forgetting but also enables positive backward transfer. We also show that our method outperforms several state-of-the-art methods in lifelong learning on complex datasets with a large number of tasks.
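The abstract sketches the core mechanic: keep one accumulated-gradient vector per task as a knowledge base, measure how related the current task is to earlier ones, and scale the learning rate accordingly. Below is a minimal NumPy sketch of that idea, not the paper's actual TAG-RMSProp update: the class name TaskAwareOptimizer, the EMA coefficient beta, the cosine-similarity relatedness measure, and the exp(rel) scaling rule are all illustrative assumptions.

```python
import numpy as np

def cosine(u, v, eps=1e-12):
    """Cosine similarity between two flattened gradient vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + eps))

class TaskAwareOptimizer:
    """Toy task-aware SGD: one accumulated-gradient vector per task acts as
    the knowledge base; the step size grows when the current task's gradients
    agree with earlier tasks and shrinks when they conflict. The constants
    and the scaling rule are illustrative, not the paper's."""

    def __init__(self, dim, lr=0.01, beta=0.9):
        self.lr = lr          # base learning rate
        self.beta = beta      # EMA coefficient for accumulated gradients
        self.dim = dim
        self.task_grads = {}  # task id -> accumulated (EMA) gradient

    def step(self, params, grad, task_id):
        # Update this task's accumulated gradient (its knowledge-base entry).
        acc = self.task_grads.get(task_id, np.zeros(self.dim))
        acc = self.beta * acc + (1.0 - self.beta) * grad
        self.task_grads[task_id] = acc

        # Relatedness: mean cosine similarity between the current task's
        # accumulated gradient and those of all previously seen tasks.
        others = [g for t, g in self.task_grads.items() if t != task_id]
        rel = float(np.mean([cosine(acc, g) for g in others])) if others else 0.0

        # Adapt the learning rate: bigger steps for related tasks (rel > 0),
        # smaller steps when the update would undo earlier tasks (rel < 0).
        lr_t = self.lr * np.exp(rel)  # hypothetical scaling rule
        return params - lr_t * grad

# Usage: two tasks updating a shared 3-dimensional parameter vector.
opt = TaskAwareOptimizer(dim=3)
params = np.zeros(3)
params = opt.step(params, np.array([1.0, 0.0, 0.0]), task_id=0)
params = opt.step(params, np.array([0.9, 0.1, 0.0]), task_id=1)  # related task
```

Scaling the step by a function of relatedness means a gradient that points against earlier tasks' accumulated directions takes a smaller step, which is one way to trade off plasticity against forgetting.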

Benchmark Results

Dataset | Model | Metric | Claimed (%) | Verified (%) | Status
5-dataset (1 epoch) | TAG-RMSProp | Accuracy | 62.59 | n/a | Unverified
CIFAR-100 (20 tasks, 1 epoch) | TAG-RMSProp | Average Accuracy | 62.79 | n/a | Unverified
CUB-200-2011 (20 tasks, 1 epoch) | TAG-RMSProp | Accuracy | 61.58 | n/a | Unverified
mini-ImageNet (20 tasks, 1 epoch) | TAG-RMSProp | Accuracy | 57.2 | n/a | Unverified

Reproductions

No reproductions have been submitted yet; be the first to reproduce this paper.