
Continual Learning via Explicit Structure Learning

2019-05-01 · ICLR 2019

Xilai Li, Yingbo Zhou, Tianfu Wu, Richard Socher, Caiming Xiong

Abstract

Despite recent advances in deep learning, neural networks suffer catastrophic forgetting when tasks are learned sequentially. We propose a conceptually simple and general framework for continual learning, in which structure optimization is considered explicitly during learning. We implement this idea by separating structure learning from parameter learning. During structure learning, the model searches for the best structure for the current task: it learns when to reuse or modify structures from previous tasks, and when to create new ones. The model parameters are then estimated under the chosen structure. Empirically, we find that our approach leads to sensible structures when learning multiple tasks continually, and that catastrophic forgetting is largely alleviated by explicitly learning structures. Our method also outperforms all baselines on the permuted MNIST and split CIFAR datasets in the continual learning setting.
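The two-phase procedure the abstract describes can be illustrated with a minimal sketch. This is not the paper's implementation (the paper uses a differentiable architecture search); here `score_fn` is a hypothetical stand-in for whatever criterion ranks the per-layer candidate operations, and the three choices ("reuse", "adapt", "new") mirror the reuse-or-modify-or-create options described above.

```python
# Hedged sketch of the two-phase scheme: (1) structure search picks, per
# layer, whether to reuse a frozen layer from earlier tasks, adapt a
# shared layer with a small task-specific module, or create a new layer;
# (2) parameter learning then trains only what the structure introduced.
# `score_fn` is a hypothetical scoring hook, not the paper's search.

CHOICES = ("reuse", "adapt", "new")

def search_structure(num_layers, score_fn):
    """Phase 1: return the chosen operation for each layer."""
    structure = []
    for layer in range(num_layers):
        # Pick the candidate operation with the highest score.
        best = max(CHOICES, key=lambda op: score_fn(layer, op))
        structure.append(best)
    return structure

def trainable_slots(structure):
    """Phase 2 (sketch): reused layers stay frozen, so only adapted or
    newly created layers receive gradient updates; earlier tasks are
    therefore not overwritten."""
    return [op for op in structure if op != "reuse"]

if __name__ == "__main__":
    # Toy scores: prefer reusing early (generic) layers and creating
    # new later (task-specific) layers.
    def toy_score(layer, op):
        preferred = "reuse" if layer < 2 else "new"
        return 1.0 if op == preferred else 0.0

    structure = search_structure(4, toy_score)
    print(structure)                  # ['reuse', 'reuse', 'new', 'new']
    print(trainable_slots(structure))  # ['new', 'new']
```

Because reused layers are frozen during phase 2, parameters serving earlier tasks are untouched, which is how this family of structure-based methods avoids catastrophic forgetting.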
