
Controllable Meaning Representation to Text Generation: Linearization and Data Augmentation Strategies

2020-11-01 · EMNLP 2020

Chris Kedzie, Kathleen McKeown

Abstract

We study the degree to which neural sequence-to-sequence models exhibit fine-grained controllability when performing natural language generation from a meaning representation. Using two task-oriented dialogue generation benchmarks, we systematically compare the effect of four input linearization strategies on controllability and faithfulness. Additionally, we evaluate how a phrase-based data augmentation method can improve performance. We find that properly aligning input sequences during training leads to highly controllable generation, both when training from scratch and when fine-tuning a larger pre-trained model. Data augmentation further improves control on difficult, randomly generated utterance plans.
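To make the idea of input linearization concrete, here is a minimal sketch of turning a task-oriented dialogue meaning representation (a dialogue act with slot-value pairs) into a flat token sequence whose order encodes an utterance plan. The function name, tag format, and slot names are illustrative assumptions, not the paper's exact scheme.

```python
def linearize_mr(dialogue_act, slots, order=None):
    """Linearize a meaning representation into a seq2seq input string.

    dialogue_act: e.g. "inform"
    slots: dict mapping slot name -> value, e.g. {"name": "Aromi"}
    order: optional list of slot names fixing the linearization order
           (the "utterance plan"); defaults to sorted slot names.
    Format (hypothetical): act and slots wrapped in XML-like tags.
    """
    keys = order if order is not None else sorted(slots)
    tokens = [f"<{dialogue_act}>"]
    for slot in keys:
        tokens += [f"<{slot}>", str(slots[slot]), f"</{slot}>"]
    tokens.append(f"</{dialogue_act}>")
    return " ".join(tokens)

# The same MR under two different plans yields two different inputs,
# which is what lets the model's output order be controlled.
mr = {"name": "Aromi", "food": "Chinese"}
print(linearize_mr("inform", mr, order=["food", "name"]))
# -> <inform> <food> Chinese </food> <name> Aromi </name> </inform>
```

Aligning the slot order in the input with the realization order in the reference text during training is what the abstract refers to as "properly aligning input sequences."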
