Text-to-Text Pre-Training for Data-to-Text Tasks

2020-05-21 · INLG (ACL) 2020 · Code Available

Mihir Kale, Abhinav Rastogi


Abstract

We study the pre-train + fine-tune strategy for data-to-text tasks. Our experiments indicate that text-to-text pre-training in the form of T5 enables simple, end-to-end transformer-based models to outperform pipelined neural architectures tailored for data-to-text generation, as well as alternative language-model-based pre-training techniques such as BERT and GPT-2. Importantly, T5 pre-training leads to better generalization, as evidenced by large improvements on out-of-domain test sets. We hope our work serves as a useful baseline for future research, as transfer learning becomes ever more prevalent for data-to-text tasks.
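The approach described in the abstract feeds structured data to a text-to-text model, which requires flattening the input (e.g., WebNLG RDF triples) into a single string. The sketch below illustrates one plausible linearization; the separator tokens and task prefix are illustrative assumptions, not the paper's exact format.

```python
def linearize_triples(triples):
    """Flatten (subject, predicate, object) triples into one string
    that a text-to-text model such as T5 can consume.
    NOTE: the marker tokens below are an illustrative assumption,
    not necessarily the delimiters used in the paper."""
    return " ".join(
        f"<subject> {s} <predicate> {p} <object> {o}"
        for s, p, o in triples
    )

# Hypothetical WebNLG-style input for a single example.
triples = [
    ("Alan_Bean", "occupation", "Test_pilot"),
    ("Alan_Bean", "birthPlace", "Wheeler,_Texas"),
]

# The "Graph to Text:" task prefix is likewise an assumption, in the
# spirit of T5's task-prefix convention.
source = "Graph to Text: " + linearize_triples(triples)
```

The resulting `source` string would then be paired with the reference sentence as the target, and the model fine-tuned with standard sequence-to-sequence cross-entropy.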

Benchmark Results

| Dataset      | Model    | Metric | Claimed | Verified | Status     |
|--------------|----------|--------|---------|----------|------------|
| MULTIWOZ 2.1 | T5-Base  | BLEU   | 35.1    | —        | Unverified |
| ToTTo        | T5-3B    | BLEU   | 49.5    | —        | Unverified |
| WebNLG       | T5-Base  | BLEU   | 64.7    | —        | Unverified |
| WebNLG Full  | T5-Large | BLEU   | 57.1    | —        | Unverified |