SOTAVerified

Learning Transferable Features with Deep Adaptation Networks

2015-02-10 · Code Available

Mingsheng Long, Yue Cao, Jianmin Wang, Michael I. Jordan


Abstract

Recent studies reveal that a deep neural network can learn transferable features which generalize well to novel tasks for domain adaptation. However, as deep features eventually transition from general to specific along the network, feature transferability drops significantly in higher layers with increasing domain discrepancy. Hence, it is important to formally reduce the dataset bias and enhance transferability in the task-specific layers. In this paper, we propose a new Deep Adaptation Network (DAN) architecture, which generalizes the deep convolutional neural network to the domain adaptation scenario. In DAN, the hidden representations of all task-specific layers are embedded in a reproducing kernel Hilbert space, where the mean embeddings of different domain distributions can be explicitly matched. The domain discrepancy is further reduced using an optimal multi-kernel selection method for mean embedding matching. DAN can learn transferable features with statistical guarantees, and can scale linearly via an unbiased estimate of the kernel embedding. Extensive empirical evidence shows that the proposed architecture yields state-of-the-art image classification error rates on standard domain adaptation benchmarks.
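The core quantity the abstract describes is the discrepancy between the mean embeddings of the source and target feature distributions in an RKHS, measured with a mixture of kernels (MK-MMD). The sketch below computes a multi-kernel MMD estimate between two batches of layer features with NumPy; the Gaussian bandwidths and equal mixture weights are illustrative assumptions, not the paper's learned kernel weights, and this is the simple quadratic-time estimate rather than the linear-time unbiased one the paper uses for scalability.

```python
import numpy as np

def gaussian_kernel(X, Y, sigma):
    """RBF kernel matrix between the rows of X and the rows of Y."""
    sq_dists = (np.sum(X**2, axis=1)[:, None]
                + np.sum(Y**2, axis=1)[None, :]
                - 2.0 * X @ Y.T)
    return np.exp(-sq_dists / (2.0 * sigma**2))

def mk_mmd(X_src, X_tgt, sigmas=(1.0, 2.0, 4.0), betas=None):
    """Multi-kernel MMD^2 between source and target features.

    Uses a convex combination of Gaussian kernels (equal weights by
    default -- an assumption; DAN selects the weights optimally).
    """
    if betas is None:
        betas = np.full(len(sigmas), 1.0 / len(sigmas))
    mmd2 = 0.0
    for beta, sigma in zip(betas, sigmas):
        k_ss = gaussian_kernel(X_src, X_src, sigma).mean()
        k_tt = gaussian_kernel(X_tgt, X_tgt, sigma).mean()
        k_st = gaussian_kernel(X_src, X_tgt, sigma).mean()
        # MMD^2 = E[k(s,s')] + E[k(t,t')] - 2 E[k(s,t)]
        mmd2 += beta * (k_ss + k_tt - 2.0 * k_st)
    return mmd2
```

In training, a penalty of this form on each task-specific layer is added to the classification loss, so that minimizing it pulls the source and target feature distributions together.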

Tasks

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| ImageCLEF-DA | DAN | Accuracy | 76.9 | | Unverified |
| MNIST-to-MNIST-M | MMD [tzeng2015ddc]; [long2015learning] | Accuracy | 76.9 | | Unverified |
| Office-Caltech | DAN [Long et al., 2015] | Average Accuracy | 90.1 | | Unverified |
| SVHN-to-MNIST | MMD [tzeng2015ddc]; [long2015learning] | Accuracy | 71.1 | | Unverified |
| SYNSIG-to-GTSRB | DAN | Accuracy | 91.1 | | Unverified |
| Synth Digits-to-SVHN | MMD [tzeng2015ddc]; [long2015learning] | Accuracy | 88 | | Unverified |
| Synth Signs-to-GTSRB | MMD [tzeng2015ddc]; [long2015learning] | Accuracy | 91.1 | | Unverified |

Reproductions