
Fast Reading Comprehension with ConvNets

2017-11-12 · ICLR 2018 · Code Available

Felix Wu, Ni Lao, John Blitzer, Guandao Yang, Kilian Weinberger


Abstract

State-of-the-art deep reading comprehension models are dominated by recurrent neural nets. Their sequential nature is a natural fit for language, but it also precludes parallelization within an instance and often becomes the bottleneck for deploying such models to latency-critical scenarios. This is particularly problematic for longer texts. Here we present a convolutional architecture as an alternative to these recurrent architectures. Using simple dilated convolutional units in place of recurrent ones, we achieve results comparable to the state of the art on two question answering tasks, while at the same time achieving up to two orders of magnitude speedups for question answering.
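The key idea in the abstract is that a dilated convolution reads its input with gaps between kernel taps, so stacked layers cover a long context while every output position can be computed in parallel. The following is a minimal NumPy sketch of that operation, not the authors' implementation; the function name and shapes are illustrative assumptions.

```python
import numpy as np

def dilated_conv1d(x, w, dilation=1):
    """Valid 1D convolution of sequence x with kernel w, leaving a gap of
    `dilation` positions between kernel taps. Illustrative only; real models
    use framework layers (e.g. a Conv1d with a dilation argument)."""
    k = len(w)
    span = (k - 1) * dilation + 1          # receptive field of one output
    out_len = len(x) - span + 1
    return np.array([
        sum(w[j] * x[i + j * dilation] for j in range(k))
        for i in range(out_len)
    ])

# Unlike a recurrent step, each output position here is independent of the
# others, so a whole layer can be computed at once. Stacking layers with
# dilations 1, 2, 4, ... grows the receptive field exponentially with depth.
x = np.arange(16, dtype=float)
w = np.array([1.0, 1.0])                   # a toy 2-tap kernel
y = dilated_conv1d(x, w, dilation=4)       # each y[i] = x[i] + x[i + 4]
```

With a 2-tap kernel and dilation 4, each output mixes positions four steps apart; adding a second layer with dilation 8 would let each output see a 13-position window, which is how these models trade recurrence for depth.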
