SOTAVerified

Data-to-text Generation with Variational Sequential Planning

2022-02-28 · Code Available

Ratish Puduppully, Yao Fu, Mirella Lapata

Abstract

We consider the task of data-to-text generation, which aims to create textual output from non-linguistic input. We focus on generating long-form text, i.e., documents with multiple paragraphs, and propose a neural model enhanced with a planning component responsible for organizing high-level information in a coherent and meaningful way. We infer latent plans sequentially with a structured variational model, while interleaving the steps of planning and generation. Text is generated by conditioning on previous variational decisions and previously generated text. Experiments on two data-to-text benchmarks (RotoWire and MLB) show that our model outperforms strong baselines and is sample efficient in the face of limited training data (e.g., a few hundred instances).
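The abstract describes interleaving the steps of planning and generation: at each step the model infers a latent plan conditioned on previous plans and previously generated text, then generates the next paragraph conditioned on that plan. A minimal sketch of this loop, with hypothetical stand-in functions (not the authors' actual variational model or decoder):

```python
import random

# Toy content slots standing in for the records a data-to-text model plans over.
RECORDS = ["TEAM_WINS", "PLAYER_POINTS", "NEXT_GAME"]

def infer_plan(prev_plans, prev_text, rng):
    """Stand-in for the structured variational posterior:
    choose a not-yet-covered record, conditioned on history."""
    remaining = [r for r in RECORDS if r not in prev_plans]
    return rng.choice(remaining)

def generate_paragraph(plan, prev_text):
    """Stand-in decoder: emit a paragraph conditioned on the
    current latent plan and the text generated so far."""
    return f"[paragraph about {plan}]"

def generate_document(seed=0, steps=3):
    """Interleave planning and generation, one paragraph per step."""
    rng = random.Random(seed)
    plans, paragraphs = [], []
    for _ in range(steps):
        z = infer_plan(plans, paragraphs, rng)   # plan step
        plans.append(z)
        paragraphs.append(generate_paragraph(z, paragraphs))  # generate step
    return plans, paragraphs
```

The key structural point the sketch preserves is that each planning decision sees both earlier plans and earlier text, so plan and text are generated jointly and sequentially rather than plan-first.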

Benchmark Results

Dataset                           | Model   | Metric    | Claimed | Verified | Status
MLB Dataset                       | SeqPlan | BLEU      | 14.29   |          | Unverified
MLB Dataset (Content Ordering)    | SeqPlan | DLD       | 22.7    |          | Unverified
MLB Dataset (Content Selection)   | SeqPlan | Precision | 43.3    |          | Unverified
MLB Dataset (Relation Generation) | SeqPlan | Precision | 95.9    |          | Unverified
RotoWire (Relation Generation)    | SeqPlan | Precision | 97.6    |          | Unverified

Reproductions