
Improving Natural Language Inference with a Pretrained Parser

2019-09-18

Deric Pang, Lucy H. Lin, Noah A. Smith


Abstract

We introduce a novel approach to incorporate syntax into natural language inference (NLI) models. Our method uses contextual token-level vector representations from a pretrained dependency parser. Like other contextual embedders, our method is broadly applicable to any neural model. We experiment with four strong NLI models (decomposable attention model, ESIM, BERT, and MT-DNN), and show consistent benefit to accuracy across three NLI benchmarks.
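The abstract describes enriching token representations with contextual vectors taken from a pretrained dependency parser, in the same plug-in style as other contextual embedders. A minimal sketch of that idea is below; it is not the authors' implementation. The frozen "parser encoder" is simulated here by a fixed random projection, whereas the paper uses internal states of an actual pretrained parser, and all dimensions and names (`embed_with_syntax`, `parser_proj`) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions for illustration only.
VOCAB, WORD_DIM, PARSER_DIM = 100, 16, 8
word_emb = rng.normal(size=(VOCAB, WORD_DIM))

# Stand-in for a frozen pretrained parser encoder: in the paper this would
# produce contextual token-level vectors from a dependency parser; here it
# is simulated by a fixed (untrained) projection.
parser_proj = rng.normal(size=(WORD_DIM, PARSER_DIM))

def embed_with_syntax(token_ids):
    """Concatenate word embeddings with parser-derived token features,
    yielding inputs usable by any downstream NLI model."""
    words = word_emb[token_ids]                      # (seq_len, WORD_DIM)
    syntax = np.tanh(words @ parser_proj)            # parser-feature stand-in
    return np.concatenate([words, syntax], axis=-1)  # (seq_len, WORD_DIM + PARSER_DIM)

tokens = np.array([3, 17, 42])
enriched = embed_with_syntax(tokens)
print(enriched.shape)  # (3, 24)
```

Because the syntax features are simply concatenated onto the existing embeddings, any model that consumes token vectors (decomposable attention, ESIM, or a Transformer) can accept them without architectural changes, which is what makes the approach broadly applicable.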
