Challenges in Data-to-Document Generation

2017-07-25 · EMNLP 2017 · Code Available

Sam Wiseman, Stuart M. Shieber, Alexander M. Rush


Abstract

Recent neural models have shown significant progress on the problem of generating short descriptive texts conditioned on a small number of database records. In this work, we suggest a slightly more difficult data-to-text generation task, and investigate how effective current approaches are on this task. In particular, we introduce a new, large-scale corpus of data records paired with descriptive documents, propose a series of extractive evaluation methods for analyzing performance, and obtain baseline results using current neural generation methods. Experiments show that these models produce fluent text, but fail to convincingly approximate human-generated documents. Moreover, even templated baselines exceed the performance of these neural models on some metrics, though copy- and reconstruction-based extensions lead to noticeable improvements.

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| RotoWire | Encoder-decoder + conditional copy | BLEU | 14.19 | | Unverified |
| RotoWire (Content Ordering) | Encoder-decoder + conditional copy | DLD | 8.68 | | Unverified |
| RotoWire (Content Selection) | Encoder-decoder + conditional copy | Precision | 29.49 | | Unverified |
| RotoWire (Relation Generation) | Encoder-decoder + conditional copy | Precision | 74.8 | | Unverified |
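The content-ordering metric above (DLD) is based on a normalized Damerau-Levenshtein distance between the sequence of records extracted from the generated document and the sequence extracted from the reference. A minimal sketch of one way to compute it, using the optimal-string-alignment variant of the distance (function names are illustrative, not taken from the paper's released code):

```python
def dl_distance(a, b):
    """Damerau-Levenshtein distance (optimal string alignment variant)
    between two sequences, e.g. sequences of record identifiers."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
            # Adjacent transposition counts as a single edit.
            if i > 1 and j > 1 and a[i - 1] == b[j - 2] and a[i - 2] == b[j - 1]:
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)
    return d[m][n]


def normalized_dld(a, b):
    """Similarity in [0, 1]: 1 minus distance over the longer length."""
    if not a and not b:
        return 1.0
    return 1.0 - dl_distance(a, b) / max(len(a), len(b))
```

For example, `normalized_dld(list("abcd"), list("abdc"))` involves a single transposition over length-4 sequences, giving a similarity of 0.75.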

Reproductions