
DELTA: DEep Learning Transfer using Feature Map with Attention for Convolutional Networks

2019-01-26 · ICLR 2019

Xingjian Li, Haoyi Xiong, Hanchao Wang, Yuxuan Rao, Li-Ping Liu, Zeyu Chen, Jun Huan


Abstract

Transfer learning by fine-tuning a neural network pre-trained on an extremely large dataset, such as ImageNet, can significantly accelerate training, yet accuracy is frequently bottlenecked by the limited size of the new target task's dataset. To address this problem, regularization methods that constrain the outer layer weights of the target network using the starting point as references (SPAR) have been studied. In this paper, we propose a novel regularized transfer learning framework, DELTA, namely DEep Learning Transfer using Feature Map with Attention. Instead of constraining the weights of the neural network, DELTA aims to preserve the outer layer outputs of the target network. Specifically, in addition to minimizing the empirical loss, DELTA aligns the outer layer outputs of the two networks by constraining a subset of feature maps that are precisely selected by attention weights learned in a supervised manner. We evaluate DELTA against state-of-the-art algorithms, including L2 and L2-SP. The experimental results show that our proposed method outperforms these baselines, achieving higher accuracy on new tasks.
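The core idea in the abstract can be sketched as a penalty term added to the empirical loss: the distance between the target network's outer-layer feature maps and those of the frozen pre-trained network, weighted per channel by learned attention. The sketch below is illustrative only; the function name, the flat-list data layout, and the assumption that the attention weights are already given are assumptions, not the authors' implementation:

```python
# Hypothetical sketch of DELTA-style behavioral regularization:
# instead of penalizing weight drift from the starting point (as in
# L2-SP), penalize the distance between the target network's feature
# maps and the pre-trained source network's feature maps, weighted
# per channel by attention learned in a supervised manner.

def delta_regularizer(target_maps, source_maps, attention):
    """Attention-weighted feature-map alignment penalty.

    target_maps, source_maps: per-channel feature maps, each a flat
        list of floats (one entry per spatial position).
    attention: per-channel weights; in the paper these come from a
        supervised attention mechanism (assumed precomputed here).
    """
    penalty = 0.0
    for w, ft, fs in zip(attention, target_maps, source_maps):
        # squared Euclidean distance between the two channel maps
        dist2 = sum((a - b) ** 2 for a, b in zip(ft, fs))
        penalty += w * dist2
    return penalty

# Example: two channels; attention emphasizes the second channel.
tgt = [[1.0, 2.0], [0.5, 0.5]]
src = [[1.0, 1.0], [0.0, 0.0]]
attn = [0.2, 0.8]
reg = delta_regularizer(tgt, src, attn)  # 0.2*1.0 + 0.8*0.5 ≈ 0.6
```

In training, this penalty would be added to the task loss (scaled by a regularization coefficient) so that channels the attention deems transferable stay close to the pre-trained behavior, while less relevant channels are free to adapt.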
