
Data-to-text Generation with Macro Planning

2021-02-04

Ratish Puduppully, Mirella Lapata


Abstract

Recent approaches to data-to-text generation have adopted the very successful encoder-decoder architecture or variants thereof. These models generate text which is fluent (but often imprecise) and perform quite poorly at selecting appropriate content and ordering it coherently. To overcome some of these issues, we propose a neural model with a macro planning stage followed by a generation stage reminiscent of traditional methods which embrace separate modules for planning and surface realization. Macro plans represent high level organization of important content such as entities, events and their interactions; they are learnt from data and given as input to the generator. Extensive experiments on two data-to-text benchmarks (RotoWire and MLB) show that our approach outperforms competitive baselines in terms of automatic and human evaluation.

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| MLB Dataset | Macro | BLEU | 12.62 | | Unverified |
| MLB Dataset (Content Ordering) | Macro | DLD | 21.8 | | Unverified |
| MLB Dataset (Content Ordering) | ENT | DLD | 20.7 | | Unverified |
| MLB Dataset (Content Selection) | Macro | Precision | 40.8 | | Unverified |
| MLB Dataset (Relation Generation) | ENT | Precision | 81.1 | | Unverified |
| MLB Dataset (Relation Generation) | Macro | Precision | 94.4 | | Unverified |
| RotoWire | Macro | BLEU | 15.46 | | Unverified |
| RotoWire (Content Ordering) | Macro | DLD | 17.7 | | Unverified |
| RotoWire (Content Selection) | Macro | Precision | 34.1 | | Unverified |
| RotoWire (Relation Generation) | Macro | Precision | 97.6 | | Unverified |
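The DLD column above refers to normalized Damerau-Levenshtein distance, commonly used in data-to-text evaluation to score how closely the order of records mentioned in generated text matches the reference. As a rough illustration (not the authors' evaluation script), here is a minimal sketch of the restricted, optimal-string-alignment variant of Damerau-Levenshtein distance and a derived similarity score; the function names `damerau_levenshtein` and `dld_similarity` are my own:

```python
def damerau_levenshtein(a, b):
    """Restricted (optimal string alignment) Damerau-Levenshtein distance.

    Works on any indexable sequences whose elements support equality,
    e.g. strings or lists of record identifiers.
    """
    # d[i][j] = edit distance between a[:i] and b[:j]
    d = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        d[i][0] = i
    for j in range(len(b) + 1):
        d[0][j] = j
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,         # deletion
                d[i][j - 1] + 1,         # insertion
                d[i - 1][j - 1] + cost,  # substitution
            )
            # Count an adjacent transposition as a single edit.
            if i > 1 and j > 1 and a[i - 1] == b[j - 2] and a[i - 2] == b[j - 1]:
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)
    return d[len(a)][len(b)]


def dld_similarity(pred, gold):
    """Normalized DLD similarity in [0, 1]; higher means closer ordering."""
    if not pred and not gold:
        return 1.0
    return 1.0 - damerau_levenshtein(pred, gold) / max(len(pred), len(gold))
```

For content ordering, `pred` and `gold` would be the sequences of records extracted from the generated and reference summaries; swapping two adjacent records, for example, costs one edit, so `dld_similarity(["r1", "r2"], ["r2", "r1"])` is 0.5.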
