iCaRL: Incremental Classifier and Representation Learning
Sylvestre-Alvise Rebuffi, Alexander Kolesnikov, Georg Sperl, Christoph H. Lampert
Code
- github.com/srebuffi/iCaRL (official, in paper; TensorFlow) ★ 270
- github.com/ContinualAI/avalanche (PyTorch) ★ 2,041
- github.com/g-u-n/pycil (PyTorch) ★ 1,066
- github.com/aimagelab/mammoth (PyTorch) ★ 793
- github.com/mmasana/FACIL (PyTorch) ★ 563
- github.com/yaoyao-liu/mnemonics (PyTorch) ★ 473
- gitlab.com/viper-purdue/ocil-real-world-food-image-classification (PyTorch) ★ 0
- github.com/DRSAD/iCaRL (PyTorch) ★ 0
- github.com/donlee90/icarl (PyTorch) ★ 0
- github.com/haseebs/Pseudo-rehearsal-Incremental-Learning (PyTorch) ★ 0
Abstract
A major open problem on the road to artificial intelligence is the development of incrementally learning systems that learn about more and more concepts over time from a stream of data. In this work, we introduce a new training strategy, iCaRL, that allows learning in such a class-incremental way: only the training data for a small number of classes has to be present at the same time and new classes can be added progressively. iCaRL learns strong classifiers and a data representation simultaneously. This distinguishes it from earlier works that were fundamentally limited to fixed data representations and therefore incompatible with deep learning architectures. We show by experiments on CIFAR-100 and ImageNet ILSVRC 2012 data that iCaRL can learn many classes incrementally over a long period of time where other strategies quickly fail.
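At inference time, iCaRL classifies with a nearest-mean-of-exemplars rule: each class is represented by the mean feature vector of its stored exemplars, and a query is assigned to the class with the closest mean. The sketch below illustrates that rule in plain NumPy; the helper names `class_means` and `classify` are our own, and the features stand in for the output of the learned representation.

```python
import numpy as np

def class_means(exemplar_features):
    """L2-normalised mean feature vector per class.

    exemplar_features: dict mapping class label -> array of shape
    (num_exemplars, feature_dim), e.g. features extracted by the
    learned network for each stored exemplar image.
    """
    means = {}
    for label, feats in exemplar_features.items():
        mu = np.mean(feats, axis=0)
        means[label] = mu / np.linalg.norm(mu)
    return means

def classify(feature, means):
    """Assign the label whose exemplar mean is closest to the query."""
    f = feature / np.linalg.norm(feature)
    return min(means, key=lambda label: np.linalg.norm(f - means[label]))

# Toy example with 2-D features for two classes:
exemplars = {
    0: np.array([[1.0, 0.0], [1.0, 0.1]]),
    1: np.array([[0.0, 1.0], [0.1, 1.0]]),
}
print(classify(np.array([0.9, 0.05]), class_means(exemplars)))  # -> 0
```

Because the class means are recomputed from exemplars whenever the feature extractor is updated, the classifier stays consistent with the evolving representation, which is what lets iCaRL add classes without retraining a fixed output layer.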
Benchmark Results
| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| CIFAR-100 | iCaRL | 10-stage average accuracy | 63.24 | — | Unverified |
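Under the common class-incremental evaluation convention, the "10-stage average accuracy" is the mean of the top-1 accuracies measured on all classes seen so far after each of the ten class increments. A minimal sketch of that aggregation (the function name and example values are illustrative, not results from the paper):

```python
def average_incremental_accuracy(stage_accuracies):
    """Mean of per-stage accuracies, one measured after each increment."""
    return sum(stage_accuracies) / len(stage_accuracies)

# Hypothetical per-stage accuracies over a 3-stage run:
print(average_incremental_accuracy([80.0, 70.0, 60.0]))  # -> 70.0
```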