On the Transferability of Representations in Neural Networks Between Datasets and Tasks
2018-11-29
Haytham M. Fayek, Lawrence Cavedon, Hong Ren Wu
Abstract
Deep networks, composed of multiple layers of hierarchical distributed representations, tend to learn low-level features in initial layers and transition to high-level features towards final layers. Paradigms such as transfer learning, multi-task learning, and continual learning leverage this notion of generic hierarchical distributed representations to share knowledge across datasets and tasks. Herein, we study the layer-wise transferability of representations in deep networks across several datasets and tasks and report a number of empirical observations.
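To make the notion of layer-wise transferability concrete, below is a minimal PyTorch sketch of one common protocol: copy the first k layers of a network trained on a source task into a fresh network, freeze them, and train only the remaining layers on the target task. The architecture, layer sizes, the helper names (`make_net`, `transfer_first_k`), and the choice of k are illustrative assumptions for this sketch, not details taken from the paper.

```python
import torch
import torch.nn as nn

def make_net(in_dim=784, hidden=256, out_dim=10, n_hidden=4):
    # A small MLP; sizes are illustrative, not from the paper.
    layers, dims = [], [in_dim] + [hidden] * n_hidden
    for i in range(n_hidden):
        layers += [nn.Linear(dims[i], dims[i + 1]), nn.ReLU()]
    layers.append(nn.Linear(hidden, out_dim))
    return nn.Sequential(*layers)

def transfer_first_k(source, target, k):
    """Copy the first k Linear layers from `source` into `target`
    and freeze them; the remaining layers keep their random init."""
    copied = 0
    for src, tgt in zip(source, target):
        if isinstance(src, nn.Linear):
            if copied == k:
                break
            tgt.load_state_dict(src.state_dict())
            for p in tgt.parameters():
                p.requires_grad = False
            copied += 1
    return target

source = make_net()  # stand-in for a net trained on the source task
target = transfer_first_k(source, make_net(), k=2)

# Only the unfrozen (upper) layers receive gradient updates on the target task.
optimizer = torch.optim.SGD(
    (p for p in target.parameters() if p.requires_grad), lr=0.01)
```

Varying k from 0 (train from scratch) to the network depth, and comparing frozen transfer against fine-tuning the copied layers, is the standard way such layer-wise transferability studies probe where representations stop being generic and become task-specific.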