
When is invariance useful in an Out-of-Distribution Generalization problem?

2020-08-04

Masanori Koyama, Shoichiro Yamaguchi

Code Available

Abstract

The goal of the Out-of-Distribution (OOD) generalization problem is to train a predictor that generalizes across all environments. Popular approaches in this field rest on the hypothesis that such a predictor should be an invariant predictor, one that captures the mechanism that remains constant across environments. While these approaches have been experimentally successful in various case studies, there is still much room for the theoretical validation of this hypothesis. This paper presents a new set of theoretical conditions necessary for an invariant predictor to achieve OOD optimality. Our theory not only applies to non-linear cases, but also generalizes the necessary condition used in rojas2018invariant. We also derive the Inter Gradient Alignment algorithm from our theory and demonstrate its competitiveness on MNIST-derived benchmark datasets as well as on two of the three Invariance Unit Tests proposed by aubinlinear.
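The abstract's Inter Gradient Alignment idea, penalizing a predictor whose per-environment risk gradients disagree, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name `iga_objective`, the linear squared-error risk, and the exact penalty form (mean squared deviation of environment gradients from their average, a proxy for the gradient variance) are assumptions made here for concreteness.

```python
import numpy as np

def iga_objective(w, envs, lam=1.0):
    """Sketch of an Inter-Gradient-Alignment-style objective:
    mean risk across environments plus a penalty on how much the
    per-environment risk gradients deviate from their average.
    `envs` is a list of (X, y) pairs, one per environment."""
    risks, grads = [], []
    for X, y in envs:
        resid = X @ w - y
        risks.append(np.mean(resid ** 2))        # squared-error risk
        grads.append(2 * X.T @ resid / len(y))   # gradient of that risk wrt w
    grads = np.stack(grads)                      # shape (n_envs, dim)
    mean_grad = grads.mean(axis=0)
    # Zero when all environments agree on the gradient direction,
    # i.e. when the predictor exploits only the shared mechanism.
    penalty = np.mean(np.sum((grads - mean_grad) ** 2, axis=1))
    return np.mean(risks) + lam * penalty
```

With identical environments the alignment penalty vanishes and the objective reduces to the ordinary mean risk; environments that pull the gradient in different directions pay a cost scaled by `lam`.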
