Learning to Generalize: Meta-Learning for Domain Generalization
Da Li, Yongxin Yang, Yi-Zhe Song, Timothy M. Hospedales
Code
- github.com/thuml/Transfer-Learning-Library (PyTorch, ★ 3,889)
- github.com/facebookresearch/DomainBed (PyTorch, ★ 1,604)
- github.com/HAHA-DL/MLDG (PyTorch, ★ 151)
- github.com/Pulkit-Khandelwal/medical-mldg-seg (PyTorch, ★ 33)
- github.com/Pulkit-Khandelwal/mldg (PyTorch, ★ 0)
Abstract
Domain shift refers to the well-known problem that a model trained on one source domain performs poorly when applied to a target domain with different statistics. Domain Generalization (DG) techniques attempt to alleviate this issue by producing models that, by design, generalize well to novel testing domains. We propose a novel meta-learning method for domain generalization. Rather than designing a specific model that is robust to domain shift, as in most previous DG work, we propose a model-agnostic training procedure for DG. Our algorithm simulates train/test domain shift during training by synthesizing virtual testing domains within each mini-batch. The meta-optimization objective requires that steps taken to improve training-domain performance should also improve testing-domain performance. This meta-learning procedure trains models with good generalization ability to novel domains. We evaluate our method and achieve state-of-the-art results on a recent cross-domain image classification benchmark, as well as demonstrating its potential on two classic reinforcement learning tasks.
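The abstract's core idea, holding out a virtual testing domain and requiring that a gradient step which improves the virtual training domains also improves the held-out one, can be illustrated with a toy sketch. This is a hypothetical first-order NumPy illustration on a linear regression model, not the authors' implementation; the function names (`mldg_step`, `grad`) and the fixed leave-one-domain-out split are assumptions for clarity.

```python
import numpy as np

def loss(w, X, y):
    # Mean-squared-error loss of a linear model on one domain.
    return np.mean((X @ w - y) ** 2)

def grad(w, X, y):
    # Gradient of the MSE loss with respect to the weights w.
    return 2 * X.T @ (X @ w - y) / len(y)

def mldg_step(w, domains, alpha=0.01, beta=0.01, gamma=1.0):
    """One meta-update in the spirit of MLDG (first-order sketch).

    Split the source domains into virtual train/test, take an inner
    step on the virtual-train gradient, and evaluate the virtual-test
    gradient at the updated parameters, so the final step improves
    both objectives.
    """
    train, test = domains[:-1], domains[-1]  # hold one domain out as virtual test
    g_train = sum(grad(w, X, y) for X, y in train) / len(train)
    w_inner = w - alpha * g_train            # inner step on virtual-train loss
    g_test = grad(w_inner, *test)            # virtual-test gradient at updated params
    # Combine both gradients (gamma weights the virtual-test term).
    return w - beta * (g_train + gamma * g_test)
```

In the full method the meta-test gradient is taken through the inner update (a second-order term); the sketch above drops that dependence, which corresponds to a common first-order approximation.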
Benchmark Results
| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| PACS | MLDG (AlexNet) | Average Accuracy (%) | 70.01 | — | Unverified |