
Invariant Rationalization

2020-03-22 · ICML 2020 · Code Available

Shiyu Chang, Yang Zhang, Mo Yu, Tommi S. Jaakkola


Abstract

Selective rationalization improves neural network interpretability by identifying a small subset of input features -- the rationale -- that best explains or supports the prediction. A typical rationalization criterion, i.e. maximum mutual information (MMI), finds the rationale that maximizes the prediction performance based only on the rationale. However, MMI can be problematic because it picks up spurious correlations between the input features and the output. Instead, we introduce a game-theoretic invariant rationalization criterion where the rationales are constrained to enable the same predictor to be optimal across different environments. We show both theoretically and empirically that the proposed rationales can rule out spurious correlations, generalize better to different test scenarios, and align better with human judgments. Our data and code are available.
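The abstract's contrast between the MMI criterion and the invariance criterion can be illustrated with a toy sketch. This is not the paper's game-theoretic method; it is a minimal, self-contained simulation (all names, data, and the 0.05 tolerance are invented) showing why a feature that merely correlates with the label in one environment gets picked up by a pooled, MMI-style view but rejected by a check that demands stable predictive power across environments.

```python
# Toy illustration (not the paper's algorithm): a causal feature predicts y in
# every environment, while a spurious feature tracks y only in environment A.
import random

random.seed(0)

def make_env(p_spurious, n=2000):
    """Generate (x_causal, x_spurious, y) triples. x_causal always equals y;
    x_spurious matches y with environment-dependent probability p_spurious."""
    data = []
    for _ in range(n):
        y = random.randint(0, 1)
        x_causal = y  # perfectly predictive in every environment
        x_spurious = y if random.random() < p_spurious else 1 - y
        data.append((x_causal, x_spurious, y))
    return data

def accuracy(data, idx):
    """Accuracy of predicting y directly from feature index `idx` (0 or 1)."""
    return sum(row[idx] == row[2] for row in data) / len(data)

env_a = make_env(p_spurious=0.9)  # spurious feature looks predictive here
env_b = make_env(p_spurious=0.5)  # ...but is pure noise here

# Pooled, MMI-style view: both features appear informative about y.
pooled = env_a + env_b
print("pooled accuracy:", accuracy(pooled, 0), accuracy(pooled, 1))

# Invariance-style view: keep a feature only if its predictive power is
# stable across environments (gap below an arbitrary 0.05 tolerance).
for idx, name in [(0, "causal"), (1, "spurious")]:
    gap = abs(accuracy(env_a, idx) - accuracy(env_b, idx))
    print(name, "kept" if gap < 0.05 else "rejected", round(gap, 3))
```

The causal feature passes both views; the spurious one survives the pooled criterion but fails the cross-environment stability check, which is the intuition behind constraining the same predictor to be optimal in every environment.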
