SOTAVerified

In Search of Lost Domain Generalization

2020-07-02 · ICLR 2021 · Code Available

Ishaan Gulrajani, David Lopez-Paz


Abstract

The goal of domain generalization algorithms is to predict well on distributions different from those seen during training. While a myriad of domain generalization algorithms exist, inconsistencies in experimental conditions -- datasets, architectures, and model selection criteria -- render fair and realistic comparisons difficult. In this paper, we are interested in understanding how useful domain generalization algorithms are in realistic settings. As a first step, we realize that model selection is non-trivial for domain generalization tasks. Contrary to prior work, we argue that domain generalization algorithms without a model selection strategy should be regarded as incomplete. Next, we implement DomainBed, a testbed for domain generalization including seven multi-domain datasets, nine baseline algorithms, and three model selection criteria. We conduct extensive experiments using DomainBed and find that, when carefully implemented, empirical risk minimization shows state-of-the-art performance across all datasets. Looking forward, we hope that the release of DomainBed, along with contributions from fellow researchers, will streamline reproducible and rigorous research in domain generalization.
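The paper's headline finding is that a carefully implemented empirical risk minimization (ERM) baseline is state-of-the-art across the benchmark datasets. The core of ERM for domain generalization is simply pooling the examples from every training domain and minimizing the average loss, ignoring domain labels. A minimal sketch of that pooling step, using a toy logistic-regression classifier in place of the ResNet models used in the paper (all function names and the synthetic data here are illustrative, not from DomainBed):

```python
import numpy as np

def erm_pooled_fit(domains, lr=0.1, steps=500):
    """ERM over pooled training domains: concatenate all examples,
    ignore domain identity, and minimize the average logistic loss
    by gradient descent. The pooling is the ERM-specific part."""
    X = np.vstack([X_d for X_d, _ in domains])
    y = np.concatenate([y_d for _, y_d in domains])
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid predictions
        grad_w = X.T @ (p - y) / len(y)         # gradient of average loss
        grad_b = np.mean(p - y)
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Two toy "training domains" sharing a labeling rule but shifted in input space.
rng = np.random.default_rng(0)

def make_domain(shift, n=200):
    X = rng.normal(shift, 1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(float)
    return X, y

train_domains = [make_domain(0.0), make_domain(3.0)]
w, b = erm_pooled_fit(train_domains)

# Evaluate on a held-out "test domain" at an unseen shift.
X_te, y_te = make_domain(1.5)
acc = np.mean(((X_te @ w + b) > 0).astype(float) == y_te)
```

A real DomainBed run replaces the linear model with a deep network and adds the model selection criteria the paper argues are essential (e.g. training-domain validation), but the pooled-loss structure is the same.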

Benchmark Results

Dataset  Model                       Metric            Claimed  Verified  Status
PACS     ERM (ResNet-50, DomainBed)  Average Accuracy  85.5     –         Unverified

Reproductions