
Discrepancy-Optimal Meta-Learning for Domain Generalization

2021-09-29

Chen Jia


Abstract

This work tackles domain generalization (DG) by learning to reduce domain shift through an episodic training procedure. In particular, we measure domain shift with the Y-discrepancy and learn to minimize the Y-discrepancy between the unseen target domain and the source domains using only source-domain samples. Theoretically, we give a PAC-style generalization bound for discrepancy-optimal meta-learning and compare it with other DG bounds, including those of ERM and domain-invariant learning. The analysis shows a tradeoff between classification performance and computational complexity for discrepancy-optimal meta-learning, and it motivates a bilevel optimization algorithm for DG. Empirically, we evaluate the algorithm with DomainBed and achieve state-of-the-art results on two DG benchmarks.
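The episodic setup the abstract describes can be sketched as follows: hold out one source domain as a meta-test domain to stand in for the unseen target, and penalize the gap in risk (a simple empirical proxy for the Y-discrepancy) between the meta-train and meta-test splits. All function names, the risk-gap proxy, and the penalty weight `lam` are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of one episode of discrepancy-optimal meta-learning.
# The paper's real algorithm is a bilevel optimization; here we only show
# the outer objective on fixed per-sample losses of a single hypothesis.

def domain_risk(losses):
    """Average loss of the current hypothesis on one domain's samples."""
    return sum(losses) / len(losses)

def y_discrepancy(train_losses, test_losses):
    """Empirical proxy for the Y-discrepancy: the absolute risk gap of
    one hypothesis between the meta-train and meta-test splits."""
    return abs(domain_risk(train_losses) - domain_risk(test_losses))

def episodic_objective(domains, meta_test_idx, lam=1.0):
    """One episode: hold out source domain `meta_test_idx` as meta-test,
    pool the rest as meta-train, and combine the meta-train risk with a
    weighted discrepancy penalty (an assumed form of the outer objective)."""
    meta_test = domains[meta_test_idx]
    meta_train = [loss for i, d in enumerate(domains)
                  if i != meta_test_idx for loss in d]
    return domain_risk(meta_train) + lam * y_discrepancy(meta_train, meta_test)
```

In a full training loop, each episode would pick a different held-out source domain and update the model on the gradient of this objective, so that the learned features keep the discrepancy to unseen domains small.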
