Learning to Generate Novel Domains for Domain Generalization
Kaiyang Zhou, Yongxin Yang, Timothy Hospedales, Tao Xiang
- github.com/mousecpn/L2A-OT (PyTorch) ★ 22
Abstract
This paper focuses on domain generalization (DG), the task of learning from multiple source domains a model that generalizes well to unseen domains. A main challenge for DG is that the available source domains often exhibit limited diversity, hampering the model's ability to learn to generalize. We therefore employ a data generator to synthesize data from pseudo-novel domains to augment the source domains. This explicitly increases the diversity of the available training domains and leads to a more generalizable model. To train the generator, we model the distribution divergence between the source and synthesized pseudo-novel domains using optimal transport, and maximize this divergence. To ensure that semantics are preserved in the synthesized data, we further impose cycle-consistency and classification losses on the generator. Our method, L2A-OT (Learning to Augment by Optimal Transport), outperforms current state-of-the-art DG methods on four benchmark datasets.
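The abstract describes a generator trained to maximize an optimal-transport divergence between source and synthesized domains, regularized by cycle-consistency and classification losses. As a rough illustration only (not the paper's implementation), the sketch below computes an entropic-regularized OT distance via Sinkhorn iterations and combines it into a toy generator objective; all function names, the uniform marginals, and the loss weights are assumptions.

```python
import numpy as np

def sinkhorn_distance(x, y, eps=1.0, n_iters=100):
    """Entropic-regularized OT distance between two point clouds
    x: (n, d) and y: (m, d), with uniform marginals (an assumption)."""
    n, m = len(x), len(y)
    # Pairwise squared-Euclidean cost matrix.
    C = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    K = np.exp(-C / eps)           # Gibbs kernel
    a = np.ones(n) / n             # uniform source marginal
    b = np.ones(m) / m             # uniform target marginal
    u = np.ones(n) / n
    v = np.ones(m) / m
    for _ in range(n_iters):       # Sinkhorn fixed-point updates
        u = a / (K @ v)
        v = b / (K.T @ u)
    P = u[:, None] * K * v[None, :]  # approximate transport plan
    return float((P * C).sum())      # transport cost under the plan

def generator_loss(src, gen, cyc, w_cycle=1.0):
    """Toy version of the generator objective: minimizing this loss
    maximizes the OT divergence between source and generated features
    while penalizing cycle-reconstruction error (weights are assumptions;
    the classification term from the abstract is omitted here)."""
    divergence = sinkhorn_distance(src, gen)
    cycle = float(np.mean((src - cyc) ** 2))
    return -divergence + w_cycle * cycle
```

In this toy setup, pushing the generated features away from the source features increases the Sinkhorn term, while the cycle penalty keeps a reconstruction of the source recoverable, mirroring the trade-off the abstract describes.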
Benchmark Results
| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| PACS | L2A-OT (ResNet-18) | Average Accuracy (%) | 82.8 | — | Unverified |