On Adversarial Removal of Hypothesis-only Bias in Natural Language Inference

2019-07-09 · SEMEVAL 2019 · Code Available

Yonatan Belinkov, Adam Poliak, Stuart M. Shieber, Benjamin Van Durme, Alexander M. Rush


Abstract

Popular Natural Language Inference (NLI) datasets have been shown to be tainted by hypothesis-only biases. Adversarial learning may help models ignore sensitive biases and spurious correlations in data. We evaluate whether adversarial learning can be used in NLI to encourage models to learn representations free of hypothesis-only biases. Our analyses indicate that the representations learned via adversarial learning may be less biased, with only small drops in NLI accuracy.
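The adversarial setup described in the abstract can be sketched as a combined objective: the shared encoder is trained to minimize the NLI loss while a hypothesis-only classifier acts as an adversary whose loss is maximized (gradient reversal). A minimal sketch under that assumption follows; the function and parameter names (`adversarial_objective`, `lam`) are illustrative, not the authors' code:

```python
# Hedged sketch (not the paper's implementation): the standard
# adversarial-removal objective, assumed to take the form
#   L_total = L_NLI - lambda * L_adversary
# where the adversary is a hypothesis-only classifier and the minus
# sign reverses its gradient with respect to the shared encoder.
def adversarial_objective(nli_loss: float, hyp_only_loss: float,
                          lam: float = 1.0) -> float:
    """Encoder minimizes NLI loss while maximizing the
    hypothesis-only adversary's loss (gradient reversal)."""
    return nli_loss - lam * hyp_only_loss

# Larger lam penalizes hypothesis-only predictability more strongly.
print(adversarial_objective(0.9, 0.4, lam=1.0))  # -> 0.5
```

In practice the reversal is applied per-batch inside the training loop (e.g. via a gradient-reversal layer), with `lam` controlling the trade-off between NLI accuracy and bias removal that the abstract alludes to.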
