
An Empirical Study of Invariant Risk Minimization

2020-04-10 · Code Available

Yo Joong Choe, Jiyeon Ham, Kyubyong Park


Abstract

Invariant risk minimization (IRM) (Arjovsky et al., 2019) is a recently proposed framework for learning predictors that are invariant to spurious correlations across different training environments. Yet, despite its theoretical justifications, IRM has not been extensively tested across varied settings. To gain a better understanding of the framework, we empirically investigate several research questions using IRMv1, the first practical algorithm proposed to approximately solve IRM. By extending the ColoredMNIST experiment in different ways, we find that IRMv1 (i) performs better as the spurious correlation varies more widely between training environments, (ii) learns an approximately invariant predictor when the underlying relationship is approximately invariant, and (iii) can be extended to an analogous setting for text classification.
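The IRMv1 objective studied in the paper adds, for each training environment, a penalty equal to the squared gradient of that environment's risk with respect to a fixed dummy classifier scale w = 1.0. As a minimal sketch of this idea, the snippet below computes the penalty analytically for a squared-error risk (the actual experiments use cross-entropy and automatic differentiation); the function names and the choice of loss are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def irmv1_penalty(preds, targets):
    """Squared gradient of the per-environment risk w.r.t. a dummy scale w.

    For squared-error risk R(w) = mean((w * preds - targets)^2),
    the gradient at w = 1.0 is dR/dw = mean(2 * (preds - targets) * preds).
    IRMv1 penalizes (dR/dw)^2, which is zero when scaling the predictor
    cannot further reduce the risk in this environment.
    """
    grad = np.mean(2.0 * (preds - targets) * preds)
    return grad ** 2

def irmv1_objective(env_preds, env_targets, lam=1.0):
    """Sum over environments of risk + lam * gradient penalty."""
    total = 0.0
    for preds, targets in zip(env_preds, env_targets):
        risk = np.mean((preds - targets) ** 2)
        total += risk + lam * irmv1_penalty(preds, targets)
    return total

# A predictor that already matches the targets in every environment
# incurs zero risk and zero penalty:
envs_p = [np.array([0.2, 0.8]), np.array([0.5, 0.5])]
envs_t = [p.copy() for p in envs_p]
print(irmv1_objective(envs_p, envs_t))  # 0.0
```

Raising `lam` trades average training risk for invariance across environments; the paper's experiments sweep this trade-off on ColoredMNIST-style tasks.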
