
Stable Natural Language Understanding via Invariant Causal Constraint

2021-11-16 · ACL ARR November 2021

Anonymous


Abstract

Natural Language Understanding (NLU) tasks require a model to capture the underlying semantics of input text. However, recent analyses demonstrate that NLU models tend to exploit dataset biases to achieve high in-dataset performance, which often leads to performance degradation on out-of-distribution (OOD) samples. To improve performance stability, previous debiasing methods empirically capture bias features from the data to prevent the model from relying on them. However, we argue that semantic information forms a causal relationship with the target labels of an NLU task, whereas bias information is merely correlated with the target labels. This distinction between semantic information and dataset biases remains insufficiently exploited, which limits the effectiveness of debiasing. To address this issue, we analyze the debiasing process from a causal perspective and present a causal-invariance-based stable NLU framework (CI-sNLU).
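The abstract does not spell out how causal invariance is enforced in CI-sNLU. As an illustration only, the general idea behind invariance-based debiasing can be sketched with an IRM-style penalty (Invariant Risk Minimization): features that are causally related to the label should yield a predictor that is simultaneously optimal across training environments, while spuriously correlated (bias) features will not. The environment grouping, loss choice, and penalty weight below are all assumptions, not details from the paper.

```python
import numpy as np

def irm_penalty(preds: np.ndarray, labels: np.ndarray) -> float:
    """IRM-style penalty for one environment: the squared gradient of the
    squared-error risk with respect to a dummy classifier scale w,
    evaluated at w = 1.0. A representation is 'invariant' when this
    gradient vanishes in every environment simultaneously."""
    grad = 2.0 * np.mean((preds - labels) * preds)
    return float(grad ** 2)

def total_objective(env_batches, lam: float = 10.0) -> float:
    """Empirical risk plus invariance penalty, summed over environments.
    `env_batches` is a list of (predictions, labels) pairs, one per
    environment (a hypothetical grouping, e.g. by annotation source)."""
    risk = sum(np.mean((p - y) ** 2) for p, y in env_batches)
    pen = sum(irm_penalty(p, y) for p, y in env_batches)
    return float(risk + lam * pen)

# Toy check: perfect predictions incur zero invariance penalty,
# while systematically miscalibrated ones do not.
p_good = np.array([0.0, 1.0, 1.0])
y = np.array([0.0, 1.0, 1.0])
print(irm_penalty(p_good, y))  # 0.0
```

A debiasing method in this family would minimize `total_objective` over model parameters, so that only features whose predictive relationship with the label is stable across environments (the causal ones, under the invariance assumption) survive training.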
