Self-ensembling for visual domain adaptation
Geoffrey French, Michal Mackiewicz, Mark Fisher
Code
- github.com/Britefury/self-ensemble-visual-domain-adapt (official, in paper, PyTorch, ★ 0)
- github.com/thuml/Transfer-Learning-Library (PyTorch, ★ 3,889)
- github.com/domainadaptation/salad (PyTorch, ★ 339)
Abstract
This paper explores the use of self-ensembling for visual domain adaptation problems. Our technique is derived from the mean teacher variant (Tarvainen et al., 2017) of temporal ensembling (Laine et al., 2017), a technique that achieved state-of-the-art results in semi-supervised learning. We introduce a number of modifications to their approach for challenging domain adaptation scenarios and evaluate its effectiveness. Our approach achieves state-of-the-art results on a variety of benchmarks, including our winning entry in the VISDA-2017 visual domain adaptation challenge. On small image benchmarks, our algorithm not only outperforms prior art but can also achieve accuracy close to that of a classifier trained in a supervised fashion.
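At the core of the mean teacher approach described above, the teacher network's weights are an exponential moving average (EMA) of the student's weights, updated after each training step. A minimal sketch of that update (illustrative only; the decay rate `alpha` and the parameter shapes here are assumptions, not values taken from the paper):

```python
import numpy as np

def ema_update(teacher_params, student_params, alpha=0.99):
    """Move each teacher parameter toward the student's parameter by EMA:

        teacher <- alpha * teacher + (1 - alpha) * student

    Both arguments are lists of arrays with matching shapes.
    """
    return [alpha * t + (1.0 - alpha) * s
            for t, s in zip(teacher_params, student_params)]

# Toy example: one weight matrix and one bias vector (shapes are arbitrary).
teacher = [np.zeros((2, 2)), np.zeros(2)]
student = [np.ones((2, 2)), np.ones(2)]
teacher = ema_update(teacher, student, alpha=0.9)
# Each teacher entry moves 10% of the way toward the student's value.
```

In a real PyTorch training loop this update would be applied to the teacher model's parameters in place after every optimizer step, with gradients disabled for the teacher.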
Benchmark Results
| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| MNIST-to-USPS | Mean teacher | Accuracy (%) | 98.26 | — | Unverified |
| SVHN-to-MNIST | Mean teacher | Accuracy (%) | 99.18 | — | Unverified |
| Synth Signs-to-GTSRB | Mean teacher | Accuracy (%) | 98.66 | — | Unverified |
| USPS-to-MNIST | Mean teacher | Accuracy (%) | 98.07 | — | Unverified |
| VisDA2017 | Mean teacher | Accuracy (%) | 85.4 | — | Unverified |