SOTAVerified

Butterfly: One-step Approach towards Wildly Unsupervised Domain Adaptation

2019-05-19 · Code Available

Feng Liu, Jie Lu, Bo Han, Gang Niu, Guangquan Zhang, Masashi Sugiyama


Abstract

In unsupervised domain adaptation (UDA), classifiers for the target domain (TD) are trained with clean labeled data from the source domain (SD) and unlabeled data from TD. In the wild, however, it is difficult to acquire a large amount of perfectly clean labeled data in SD given a limited budget. Hence, we consider a new, more realistic and more challenging problem setting, where classifiers have to be trained with noisy labeled data from SD and unlabeled data from TD -- we name it wildly UDA (WUDA). We show that WUDA ruins all UDA methods if label noise in SD is left untreated, and to this end, we propose a Butterfly framework, a powerful and efficient solution to WUDA. Butterfly maintains four deep networks simultaneously, where two take care of all adaptations (i.e., noisy-to-clean, labeled-to-unlabeled, and SD-to-TD-distributional) and the other two can then focus on classification in TD. As a consequence, Butterfly possesses all the conceptually necessary components for solving WUDA. Experiments demonstrate that, under WUDA, Butterfly significantly outperforms existing baseline methods.
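The four-network idea from the abstract can be sketched very loosely as follows. This is not the authors' implementation: it stands in co-teaching-style small-loss selection for the paper's actual adaptation branches, uses plain logistic regression in place of deep networks, omits the SD-to-TD distributional adaptation entirely, and all names (`LinearClassifier`, `small_loss_indices`, `F1`, `F2`, `F1_t`, `F2_t`) are illustrative assumptions. It only shows the structural pattern of two peer networks cross-selecting "clean" samples for each other while a second pair trains on the agreed-clean subset.

```python
import numpy as np

rng = np.random.default_rng(0)

class LinearClassifier:
    """Tiny logistic-regression model, a stand-in for a deep network."""
    def __init__(self, dim):
        self.w = np.zeros(dim)

    def loss_per_sample(self, X, y):
        p = 1.0 / (1.0 + np.exp(-(X @ self.w)))
        eps = 1e-9  # numerical safety for log
        return -(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

    def step(self, X, y, lr=0.1):
        # One full-batch gradient step on the logistic loss.
        p = 1.0 / (1.0 + np.exp(-(X @ self.w)))
        self.w -= lr * X.T @ (p - y) / len(y)

def small_loss_indices(net, X, y, keep_ratio):
    """Indices of the keep_ratio fraction of samples with smallest loss,
    i.e. the samples this network believes are cleanly labeled."""
    losses = net.loss_per_sample(X, y)
    k = max(1, int(keep_ratio * len(y)))
    return np.argsort(losses)[:k]

# Synthetic noisy source-domain data: linearly separable labels
# with 20% of them flipped.
X = rng.normal(size=(200, 5))
true_w = rng.normal(size=5)
y = (X @ true_w > 0).astype(float)
flip = rng.random(200) < 0.2
y_noisy = np.where(flip, 1 - y, y)

# Four networks: F1, F2 handle the noisy data (selection branch),
# while F1_t, F2_t focus on the final classifier, trained only on the
# subset both peers agree looks clean.
F1, F2 = LinearClassifier(5), LinearClassifier(5)
F1_t, F2_t = LinearClassifier(5), LinearClassifier(5)

keep = 0.8  # keep roughly 1 - (noise rate) of each batch
for epoch in range(50):
    idx1 = small_loss_indices(F1, X, y_noisy, keep)  # F1 selects for F2
    idx2 = small_loss_indices(F2, X, y_noisy, keep)  # F2 selects for F1
    F1.step(X[idx2], y_noisy[idx2])
    F2.step(X[idx1], y_noisy[idx1])
    clean = np.intersect1d(idx1, idx2)  # agreed pseudo-clean subset
    F1_t.step(X[clean], y_noisy[clean])
    F2_t.step(X[clean], y_noisy[clean])

# Accuracy of the "target" classifier against the true (pre-noise) labels.
acc = ((X @ F1_t.w > 0).astype(float) == y).mean()
print(round(acc, 2))
```

Cross-selection (each network trains on samples chosen by its peer) is the co-teaching trick for keeping the two networks from confirming each other's mistakes; the real Butterfly framework additionally adapts across the SD/TD distribution gap, which this sketch does not attempt.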

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| Noisy-Amazon (20%) | Butterfly | Average Accuracy | 71.53 | | Unverified |
| Noisy-Amazon (45%) | Butterfly | Average Accuracy | 56.01 | | Unverified |
| Noisy-MNIST-to-SYND | Butterfly | Average Accuracy | 57.55 | | Unverified |
| Noisy-SYND-to-MNIST | Butterfly | Average Accuracy | 94.09 | | Unverified |
