ATPL: Mutually enhanced adversarial training and pseudo labeling for unsupervised domain adaptation

2022-08-17 · Knowledge-Based Systems, 2022

Changan Yi, Haotian Chen, Yonghui Xu, Yong Liu, Lei Jiang, Haishu Tan

Abstract

Unsupervised domain adaptation aims to transfer knowledge from a labeled source domain to a related but unlabeled target domain. Most existing approaches either adversarially reduce the domain shift or use pseudo-labels to provide category information during adaptation. However, an adversarial training method may sacrifice the discriminability of the target data, since no category information is available. Moreover, it is difficult for a pseudo-labeling method to produce high-confidence samples, since the classifier is typically trained only on the source domain and a domain discrepancy exists; this can have a negative influence on learning target representations. A potential solution is to make the two strategies compensate for each other, simultaneously guaranteeing feature transferability and discriminability, the two key criteria of feature representations in domain adaptation. In this paper, we propose a novel method named ATPL, which mutually promotes Adversarial Training and Pseudo Labeling for unsupervised domain adaptation. ATPL produces high-confidence pseudo-labels through adversarial training, and in turn uses the pseudo-labeled information to improve the adversarial training process, which guarantees feature transferability by generating adversarial data to bridge the domain gap. The pseudo-labels also boost feature discriminability. Extensive experiments on real datasets demonstrate that the proposed ATPL method outperforms state-of-the-art unsupervised domain adaptation methods.
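The abstract's "high-confidence pseudo-labels" idea is commonly realized by keeping only target samples whose predicted class probability clears a confidence threshold. The sketch below illustrates that selection step only; the function names and the 0.9 threshold are illustrative assumptions, not details from the paper.

```python
import math

def softmax(logits):
    """Convert raw logits to class probabilities (numerically stable)."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def select_pseudo_labels(target_logits, threshold=0.9):
    """Return (sample_index, pseudo_label) pairs for target samples
    whose top class probability exceeds `threshold`.

    `threshold` is a hypothetical confidence cut-off; the paper's
    actual selection criterion may differ.
    """
    selected = []
    for i, logits in enumerate(target_logits):
        probs = softmax(logits)
        confidence = max(probs)
        if confidence >= threshold:
            selected.append((i, probs.index(confidence)))
    return selected

# A confident prediction (logit gap of 5) is kept; an ambiguous one is not.
print(select_pseudo_labels([[5.0, 0.0], [0.1, 0.0]]))  # → [(0, 0)]
```

In the full method, the retained pairs would supply category information to the adversarial training stage, while adversarial training in turn sharpens the classifier that generates these pseudo-labels.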
