
Progressive Neural Networks

2016-06-15 · Code Available

Andrei A. Rusu, Neil C. Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, Raia Hadsell


Abstract

Learning to solve complex sequences of tasks, while both leveraging transfer and avoiding catastrophic forgetting, remains a key obstacle to achieving human-level intelligence. The progressive networks approach represents a step forward in this direction: they are immune to forgetting and can leverage prior knowledge via lateral connections to previously learned features. We evaluate this architecture extensively on a wide variety of reinforcement learning tasks (Atari and 3D maze games), and show that it outperforms common baselines based on pretraining and finetuning. Using a novel sensitivity measure, we demonstrate that transfer occurs at both low-level sensory and high-level control layers of the learned policy.
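The architecture the abstract describes can be sketched in a few lines: each new task gets a fresh "column" of parameters, all earlier columns are frozen (hence no forgetting), and lateral adapter connections feed the frozen columns' hidden activations into the new column (hence transfer). The snippet below is a minimal illustrative sketch, not the paper's implementation; the layer sizes, the adapter matrix `U21`, and the use of plain NumPy forward passes are assumptions made for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Column 1: trained on task 1, then frozen (its weights are never updated again).
W1_h = rng.standard_normal((4, 3))   # input (3) -> hidden (4)
W1_o = rng.standard_normal((2, 4))   # hidden (4) -> output (2)

# Column 2: fresh parameters for task 2, plus a lateral adapter U21 that
# maps column 1's hidden activations into column 2's hidden layer.
W2_h = rng.standard_normal((4, 3))
U21  = rng.standard_normal((4, 4))   # lateral: col-1 hidden -> col-2 hidden
W2_o = rng.standard_normal((2, 4))

def forward_col1(x):
    """Frozen task-1 column: returns (hidden activations, task-1 output)."""
    h1 = relu(W1_h @ x)
    return h1, W1_o @ h1

def forward_col2(x):
    """Task-2 column: combines its own input pathway with frozen col-1 features."""
    h1, _ = forward_col1(x)              # reused prior knowledge, never trained
    h2 = relu(W2_h @ x + U21 @ h1)       # lateral connection adds transfer
    return W2_o @ h2

x = rng.standard_normal(3)
print(forward_col2(x).shape)  # prints (2,)
```

Training would update only `W2_h`, `U21`, and `W2_o`; because column 1's weights are untouched, task-1 performance cannot degrade, which is the sense in which progressive networks are "immune to forgetting".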

Benchmark Results

Dataset                              | Model          | Metric   | Claimed | Verified | Status
------------------------------------ | -------------- | -------- | ------- | -------- | ----------
CUBS (Fine-grained 6 Tasks)          | ProgressiveNet | Accuracy | 78.94   | -        | Unverified
Flowers (Fine-grained 6 Tasks)       | ProgressiveNet | Accuracy | 93.41   | -        | Unverified
ImageNet (Fine-grained 6 Tasks)      | ProgressiveNet | Accuracy | 76.16   | -        | Unverified
Sketch (Fine-grained 6 Tasks)        | ProgressiveNet | Accuracy | 76.35   | -        | Unverified
Stanford Cars (Fine-grained 6 Tasks) | ProgressiveNet | Accuracy | 89.21   | -        | Unverified
Wikiart (Fine-grained 6 Tasks)       | ProgressiveNet | Accuracy | 74.94   | -        | Unverified
