SOTAVerified

Deep Domain-Adversarial Image Generation for Domain Generalisation

2020-03-12

Kaiyang Zhou, Yongxin Yang, Timothy Hospedales, Tao Xiang


Abstract

Machine learning models typically suffer from the domain shift problem when trained on a source dataset and evaluated on a target dataset of different distribution. To overcome this problem, domain generalisation (DG) methods aim to leverage data from multiple source domains so that a trained model can generalise to unseen domains. In this paper, we propose a novel DG approach based on Deep Domain-Adversarial Image Generation (DDAIG). Specifically, DDAIG consists of three components, namely a label classifier, a domain classifier and a domain transformation network (DoTNet). The goal for DoTNet is to map the source training data to unseen domains. This is achieved by having a learning objective formulated to ensure that the generated data can be correctly classified by the label classifier while fooling the domain classifier. By augmenting the source training data with the generated unseen domain data, we can make the label classifier more robust to unknown domain changes. Extensive experiments on four DG datasets demonstrate the effectiveness of our approach.
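The abstract describes DoTNet's learning objective: a perturbation of the source data should still be correctly classified by the label classifier while fooling the domain classifier. The following is a minimal NumPy sketch of that combined objective, not the paper's implementation; the linear classifiers, the perturbation weight `lam`, and the function names are illustrative assumptions.

```python
import numpy as np

def softmax(z):
    # numerically stable softmax over the last axis
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(probs, y):
    # mean negative log-likelihood of the true classes
    return -np.mean(np.log(probs[np.arange(len(y)), y] + 1e-12))

def dotnet_objective(x, y_label, y_domain, delta, W_label, W_domain, lam=0.5):
    """DDAIG-style objective for a DoTNet perturbation `delta` (sketch).

    The perturbed input x + lam * delta should keep the label-classifier
    loss low (first term) while making the domain-classifier loss high,
    so the domain term enters with a minus sign. Minimising this value
    w.r.t. `delta` yields data that looks like an unseen domain but
    remains label-consistent. W_label / W_domain are toy linear heads.
    """
    x_new = x + lam * delta
    l_label = cross_entropy(softmax(x_new @ W_label), y_label)
    l_domain = cross_entropy(softmax(x_new @ W_domain), y_domain)
    return l_label - l_domain
```

In the full method, the perturbed batch would be appended to the source batch to train a more domain-robust label classifier; here the sketch only evaluates the adversarial objective itself.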

Tasks

Domain Generalisation

Benchmark Results

Dataset | Model             | Metric           | Claimed | Verified | Status
PACS    | DDAIG (ResNet-18) | Average Accuracy | 83.1    |          | Unverified

Reproductions

None yet.