SOTAVerified

Self-ensembling for visual domain adaptation

2017-06-16 · ICLR 2018 · Code Available

Geoffrey French, Michal Mackiewicz, Mark Fisher


Abstract

This paper explores the use of self-ensembling for visual domain adaptation problems. Our technique is derived from the mean teacher variant (Tarvainen et al., 2017) of temporal ensembling (Laine et al., 2017), a technique that achieved state-of-the-art results in the area of semi-supervised learning. We introduce a number of modifications to their approach for challenging domain adaptation scenarios and evaluate its effectiveness. Our approach achieves state-of-the-art results on a variety of benchmarks, including our winning entry in the VISDA-2017 visual domain adaptation challenge. On small image benchmarks, our algorithm not only outperforms prior art, but can also achieve accuracy close to that of a classifier trained in a supervised fashion.
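The core of the method described above is the mean teacher scheme: the teacher network's weights are an exponential moving average (EMA) of the student's weights, and a consistency loss pushes the student's predictions on target-domain inputs toward the teacher's. A minimal sketch of these two ingredients, with illustrative names and toy numbers (not taken from the paper):

```python
# Minimal sketch of the mean-teacher updates behind self-ensembling.
# Weights and predictions are plain Python lists for clarity; a real
# implementation would operate on network parameter tensors.

def ema_update(teacher_w, student_w, alpha=0.99):
    """Blend student weights into the teacher: t <- alpha*t + (1-alpha)*s."""
    return [alpha * t + (1.0 - alpha) * s for t, s in zip(teacher_w, student_w)]

def consistency_loss(student_pred, teacher_pred):
    """Mean squared difference between student and teacher class probabilities."""
    n = len(student_pred)
    return sum((s - t) ** 2 for s, t in zip(student_pred, teacher_pred)) / n

# Toy usage: the teacher drifts slowly toward the student after each step.
teacher = [0.0, 0.0]
student = [1.0, -1.0]
teacher = ema_update(teacher, student, alpha=0.9)
loss = consistency_loss([0.7, 0.3], [0.6, 0.4])
```

In training, the gradient of the consistency loss (plus a supervised loss on labelled source-domain data) updates the student, after which the EMA update refreshes the teacher; the paper's modifications for domain adaptation (e.g. confidence thresholding, class balancing) sit on top of this loop.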

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| MNIST-to-USPS | Mean teacher | Accuracy | 98.26 | | Unverified |
| SVHN-to-MNIST | Mean teacher | Accuracy | 99.18 | | Unverified |
| Synth Signs-to-GTSRB | Mean teacher | Accuracy | 98.66 | | Unverified |
| USPS-to-MNIST | Mean teacher | Accuracy | 98.07 | | Unverified |
| VisDA2017 | Mean teacher | Accuracy | 85.4 | | Unverified |
