
Towards a General Framework for Continual Learning with Pre-training

2023-10-21

Liyuan Wang, Jingyi Xie, Xingxing Zhang, Hang Su, Jun Zhu


Abstract

In this work, we present a general framework for continual learning of sequentially arriving tasks that leverages pre-training, which has emerged as a promising direction for artificial intelligence systems to accommodate real-world dynamics. From a theoretical perspective, we decompose its objective into three hierarchical components: within-task prediction, task-identity inference, and task-adaptive prediction. We then propose an innovative approach that explicitly optimizes these components with parameter-efficient fine-tuning (PEFT) techniques and representation statistics. We empirically demonstrate the superiority and generality of our approach in downstream continual learning, and further explore the applicability of PEFT techniques in upstream continual learning. We also discuss the biological basis of the proposed framework in light of recent advances in neuroscience.
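The abstract does not spell out the mechanics, but the three components suggest a two-stage inference pipeline at test time. The sketch below is a hypothetical illustration of one way such a pipeline could look, not the paper's actual method: it assumes per-task Gaussian statistics over frozen pre-trained representations for task-identity inference, followed by routing to a task-specific PEFT module for task-adaptive prediction. All names (`TaskStatistics`, `infer_task_identity`, `peft_heads`) are invented for this example.

```python
import numpy as np

class TaskStatistics:
    """Hypothetical per-task Gaussian statistics of pre-trained representations."""

    def __init__(self, features: np.ndarray):
        # features: (n_samples, d) embeddings from the frozen pre-trained backbone.
        self.mean = features.mean(axis=0)
        cov = np.cov(features, rowvar=False)
        # Shrinkage toward the identity keeps the covariance well-conditioned.
        self.cov_inv = np.linalg.inv(cov + 1e-3 * np.eye(cov.shape[0]))

    def score(self, z: np.ndarray) -> float:
        # Negative squared Mahalanobis distance: higher means z is more
        # plausible under this task's representation statistics.
        diff = z - self.mean
        return -float(diff @ self.cov_inv @ diff)

def infer_task_identity(z: np.ndarray, stats: list) -> int:
    """Task-identity inference: pick the task whose statistics best explain z."""
    return int(np.argmax([s.score(z) for s in stats]))

def task_adaptive_predict(z: np.ndarray, task_id: int, peft_heads: list):
    """Task-adaptive prediction: route to the PEFT module of the inferred task,
    which wraps the within-task predictor learned for that task."""
    return peft_heads[task_id](z)
```

Under this reading, within-task prediction is handled by each task's PEFT module, task-identity inference by the representation statistics, and task-adaptive prediction by composing the two; the paper itself should be consulted for the actual formulation and training objective.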
