SOTAVerified

Learning to Generate Novel Domains for Domain Generalization

2020-07-07 · ECCV 2020 · Code Available

Kaiyang Zhou, Yongxin Yang, Timothy Hospedales, Tao Xiang

Abstract

This paper focuses on domain generalization (DG), the task of learning from multiple source domains a model that generalizes well to unseen domains. A main challenge for DG is that the available source domains often exhibit limited diversity, hampering the model's ability to learn to generalize. We therefore employ a data generator to synthesize data from pseudo-novel domains to augment the source domains. This explicitly increases the diversity of available training domains and leads to a more generalizable model. To train the generator, we model the distribution divergence between source and synthesized pseudo-novel domains using optimal transport, and maximize the divergence. To ensure that semantics are preserved in the synthesized data, we further impose cycle-consistency and classification losses on the generator. Our method, L2A-OT (Learning to Augment by Optimal Transport), outperforms current state-of-the-art DG methods on four benchmark datasets.
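
The divergence term the abstract describes can be sketched with a generic entropic optimal-transport (Sinkhorn) estimator between feature batches. The function below and the loss weights in the comments are illustrative assumptions, not the paper's exact formulation or feature space.

```python
import numpy as np
from scipy.special import logsumexp

def sinkhorn_ot(x, y, eps=1.0, n_iters=200):
    """Entropic-regularized OT cost between two feature batches with
    uniform weights, via log-domain Sinkhorn iterations. A generic
    stand-in for the divergence L2A-OT maximizes; the paper's exact
    estimator may differ."""
    # pairwise squared-Euclidean cost matrix, shape (n, m)
    c = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    n, m = c.shape
    log_a = np.full(n, -np.log(n))  # uniform weights on x
    log_b = np.full(m, -np.log(m))  # uniform weights on y
    f, g = np.zeros(n), np.zeros(m)
    for _ in range(n_iters):        # dual-potential updates, stable in log space
        g = -eps * logsumexp(log_a[:, None] + (f[:, None] - c) / eps, axis=0)
        f = -eps * logsumexp(log_b[None, :] + (g[None, :] - c) / eps, axis=1)
    # transport plan P_ij = a_i * b_j * exp((f_i + g_j - c_ij) / eps)
    log_p = log_a[:, None] + log_b[None, :] + (f[:, None] + g[None, :] - c) / eps
    return float((np.exp(log_p) * c).sum())

# Conceptual generator objective (signs and weights are assumptions):
# maximize the OT divergence to the source domains while cycle-consistency
# and classification losses preserve label semantics, e.g.
#   loss_G = -sinkhorn_ot(feat(x_src), feat(G(x_src)))
#            + lam_cyc * l1(G_back(G(x_src)), x_src)
#            + lam_cls * cross_entropy(classifier(G(x_src)), y_src)
```

Minimizing `loss_G` pushes generated features away from the sources under the OT geometry; identical batches give a near-zero cost, while a shifted batch gives a large one.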

Tasks

Domain Generalization

Benchmark Results

Dataset | Model              | Metric           | Claimed | Verified | Status
PACS    | L2A-OT (ResNet-18) | Average Accuracy | 82.8    |          | Unverified

Reproductions