DeCAF: A Deep Convolutional Activation Feature for Generic Visual Recognition
Jeff Donahue, Yangqing Jia, Oriol Vinyals, Judy Hoffman, Ning Zhang, Eric Tzeng, Trevor Darrell
Code
- github.com/UCBAIR/decaf-release
- github.com/jetpacapp/DeepBeliefSDK
- github.com/starfoe/Eye-bnb (TensorFlow)
- github.com/Aniket7/Transfer-Learning
- github.com/weiwang2330/siamese
- github.com/umangkeshri/JBM-classifying-defected-parts (TensorFlow)
- github.com/Kadenze/siamese_net
- github.com/UCB-ICSI-Vision-Group/decaf-release
Abstract
We evaluate whether features extracted from the activation of a deep convolutional network trained in a fully supervised fashion on a large, fixed set of object recognition tasks can be re-purposed to novel generic tasks. Our generic tasks may differ significantly from the originally trained tasks, and there may be insufficient labeled or unlabeled data to conventionally train or adapt a deep architecture to the new tasks. We investigate and visualize the semantic clustering of deep convolutional features with respect to a variety of such tasks, including scene recognition, domain adaptation, and fine-grained recognition challenges. We compare the efficacy of relying on various network levels to define a fixed feature, and report novel results that significantly outperform the state-of-the-art on several important vision challenges. We are releasing DeCAF, an open-source implementation of these deep convolutional activation features, along with all associated network parameters, to enable vision researchers to conduct experiments with deep representations across a range of visual concept learning paradigms.
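The core recipe is simple: run images through a CNN pretrained on a large supervised task, read off the activations of an upper layer as a fixed feature vector, and train a lightweight classifier on those vectors for the new task. The sketch below illustrates this pattern in modern terms, using torchvision's AlexNet as a stand-in for the paper's original Caffe-era network; the layer choice mirrors the paper's DeCAF7 feature (activations after the second fully connected layer), but the specific model, weights, and preprocessing here are illustrative assumptions, not the paper's exact pipeline.

```python
# Minimal sketch of DeCAF-style fixed-feature extraction.
# Assumption: torchvision's ImageNet-pretrained AlexNet stands in for the
# network used in the paper; layer indexing below is specific to it.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Load a network pretrained on ImageNet -- the fully supervised source task.
model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
model.eval()

# DeCAF7-style feature: the 4096-d activation after the second fully
# connected layer. AlexNet's classifier head is
# [Dropout, fc6, ReLU, Dropout, fc7, ReLU, fc8]; dropping the final
# ImageNet-specific layer (fc8) leaves the fc7 activations.
feature_extractor = torch.nn.Sequential(
    model.features,
    model.avgpool,
    torch.nn.Flatten(),
    *list(model.classifier.children())[:-1],
)

preprocess = T.Compose([
    T.Resize(256),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406],
                std=[0.229, 0.224, 0.225]),
])

def decaf_feature(image_path: str) -> torch.Tensor:
    """Return a fixed 4096-d activation feature for one image."""
    x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():  # the network is a frozen feature extractor
        return feature_extractor(x).squeeze(0)
```

Following the paper's protocol, these fixed features would then be fed to a simple classifier such as a linear SVM or logistic regression trained on the target task's labels; the convolutional network itself is never fine-tuned.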