
A Decomposable Attention Model for Natural Language Inference

2016-06-06 · EMNLP 2016 · Code Available

Ankur P. Parikh, Oscar Täckström, Dipanjan Das, Jakob Uszkoreit


Abstract

We propose a simple neural architecture for natural language inference. Our approach uses attention to decompose the problem into subproblems that can be solved separately, thus making it trivially parallelizable. On the Stanford Natural Language Inference (SNLI) dataset, we obtain state-of-the-art results with almost an order of magnitude fewer parameters than previous work and without relying on any word-order information. Adding intra-sentence attention that takes a minimum amount of order into account yields further improvements.
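The abstract describes the attend/compare/aggregate decomposition: attention aligns the two sentences, each aligned token pair is compared independently (hence the parallelism), and the comparison vectors are summed and classified. The following numpy sketch illustrates that structure; the random feed-forward networks `F`, `G`, `H`, the helper names, and all dimensions are illustrative assumptions, not the paper's trained model.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def mlp(dims, rng):
    # Randomly initialized ReLU feed-forward net, stored as weight matrices.
    return [rng.standard_normal((m, n)) * 0.1 for m, n in zip(dims[:-1], dims[1:])]

def apply_mlp(ws, x):
    for w in ws[:-1]:
        x = np.maximum(x @ w, 0.0)
    return x @ ws[-1]

def decomposable_attention(a, b, F, G, H):
    # Attend: unnormalized alignment scores e[i, j] = F(a_i) . F(b_j).
    e = apply_mlp(F, a) @ apply_mlp(F, b).T
    beta = softmax(e, axis=1) @ b      # soft alignment of b to each a_i
    alpha = softmax(e, axis=0).T @ a   # soft alignment of a to each b_j
    # Compare: each (token, aligned phrase) pair is processed separately,
    # so this step is trivially parallelizable over positions.
    v1 = apply_mlp(G, np.concatenate([a, beta], axis=1))
    v2 = apply_mlp(G, np.concatenate([b, alpha], axis=1))
    # Aggregate: sum over positions (order-insensitive), then classify.
    return apply_mlp(H, np.concatenate([v1.sum(axis=0), v2.sum(axis=0)]))

rng = np.random.default_rng(0)
d, hidden, n_classes = 8, 16, 3            # toy sizes, not the paper's
a = rng.standard_normal((5, d))            # premise: 5 token embeddings
b = rng.standard_normal((7, d))            # hypothesis: 7 token embeddings
F = mlp([d, hidden, hidden], rng)
G = mlp([2 * d, hidden, hidden], rng)
H = mlp([2 * hidden, hidden, n_classes], rng)
logits = decomposable_attention(a, b, F, G, H)
print(logits.shape)  # (3,): one score per entailment class
```

Because the sum in the aggregate step discards token order entirely, this sketch also shows why the base model uses no word-order information; the paper's intra-sentence attention variant adds a minimal amount of order back in.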


Benchmark Results

Dataset | Model                                                                        | Metric          | Claimed | Verified | Status
SNLI    | 200D decomposable attention feed-forward model with intra-sentence attention | % Test Accuracy | 86.8    |          | Unverified
SNLI    | 200D decomposable attention model with intra-sentence attention              | % Test Accuracy | 86.8    |          | Unverified
SNLI    | 200D decomposable attention feed-forward model                               | % Test Accuracy | 86.3    |          | Unverified
SNLI    | 200D decomposable attention model                                            | % Test Accuracy | 86.3    |          | Unverified
