SOTAVerified

Neural Variational Inference for Text Processing

2015-11-19 · Code Available

Yishu Miao, Lei Yu, Phil Blunsom

Abstract

Recent advances in neural variational inference have spawned a renaissance in deep latent variable models. In this paper we introduce a generic variational inference framework for generative and conditional models of text. While traditional variational methods derive an analytic approximation for the intractable distributions over latent variables, here we construct an inference network conditioned on the discrete text input to provide the variational distribution. We validate this framework on two very different text modelling applications, generative document modelling and supervised question answering. Our neural variational document model combines a continuous stochastic document representation with a bag-of-words generative model and achieves the lowest reported perplexities on two standard test corpora. The neural answer selection model employs a stochastic representation layer within an attention mechanism to extract the semantics between a question and answer pair. On two question answering benchmarks this model surpasses all previously published results.
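The abstract's first application can be sketched in code: an inference network maps the bag-of-words input to a Gaussian variational distribution over a continuous document representation, and a softmax decoder reconstructs the words. The sketch below is a minimal, hedged illustration in PyTorch; the layer sizes, names, and training details are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch of a neural variational document model (NVDM).
# Assumes PyTorch; sizes and architecture details are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NVDM(nn.Module):
    def __init__(self, vocab_size=2000, hidden=500, latent=50):
        super().__init__()
        # Inference network: conditioned on the bag-of-words input x,
        # it parameterises the variational distribution q(h | x).
        self.enc = nn.Sequential(nn.Linear(vocab_size, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, latent)
        self.logvar = nn.Linear(hidden, latent)
        # Generative network: softmax over the vocabulary, p(word | h).
        self.dec = nn.Linear(latent, vocab_size)

    def forward(self, x):
        # x: (batch, vocab_size) word-count vectors.
        e = self.enc(x)
        mu, logvar = self.mu(e), self.logvar(e)
        # Reparameterisation trick: h = mu + sigma * eps, eps ~ N(0, I).
        h = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        log_probs = F.log_softmax(self.dec(h), dim=-1)
        # Negative variational lower bound:
        # reconstruction term plus KL(q(h|x) || N(0, I)).
        recon = -(x * log_probs).sum(-1)
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1)
        return (recon + kl).mean()

x = torch.rand(8, 2000).round()  # toy bag-of-words batch
loss = NVDM()(x)                 # scalar bound to minimise
```

In practice the bound would be minimised with a stochastic gradient optimiser over document batches; perplexity is then estimated from the learned bound.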

Benchmark Results

| Dataset | Model                                 | Metric | Claimed | Verified | Status     |
|---------|---------------------------------------|--------|---------|----------|------------|
| QASent  | Attentive LSTM                        | MAP    | 0.73    | —        | Unverified |
| QASent  | LSTM (lexical overlap + dist output)  | MAP    | 0.72    | —        | Unverified |
| QASent  | LSTM                                  | MAP    | 0.64    | —        | Unverified |
| WikiQA  | Attentive LSTM                        | MAP    | 0.69    | —        | Unverified |
| WikiQA  | LSTM (lexical overlap + dist output)  | MAP    | 0.68    | —        | Unverified |
| WikiQA  | LSTM                                  | MAP    | 0.66    | —        | Unverified |
