Control Prefixes for Parameter-Efficient Text Generation

2021-10-15 · Code Available

Jordan Clive, Kris Cao, Marek Rei


Abstract

Prefix-tuning is a powerful lightweight technique for adapting a large pre-trained language model to a downstream application. However, it uses the same dataset-level tuned prompt for all examples in the dataset. We extend this idea and propose a dynamic method, Control Prefixes, which allows for the inclusion of conditional input-dependent information, combining the benefits of prompt tuning and controlled generation. The method incorporates attribute-level learnable representations into different layers of a pre-trained transformer, allowing for the generated text to be guided in a particular direction. We provide a systematic evaluation of the technique and apply it to five datasets from the GEM benchmark for natural language generation (NLG). Although the aim is to develop a parameter-efficient model, we show Control Prefixes can even outperform full fine-tuning methods. We present state-of-the-art results on several data-to-text datasets, including WebNLG.
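As a rough illustration of the idea described above, here is a minimal sketch of how a shared dataset-level prefix could be combined with attribute-level control prefixes to form per-layer key/value states prepended to the transformer's attention. This is a hypothetical simplification, not the authors' implementation: the class name, the `A1`/`A2` attribute labels, and all dimensions are illustrative.

```python
import numpy as np


class ControlPrefixes:
    """Sketch of Control Prefixes (hypothetical simplification):
    a shared dataset-level prefix plus one learnable prefix per
    attribute label, concatenated along the sequence axis and
    prepended as extra key/value states at every layer."""

    def __init__(self, n_layers=2, n_heads=4, d_head=16,
                 general_len=5, control_len=3, attributes=("A1", "A2")):
        rng = np.random.default_rng(0)
        # 2 on axis 1 stands for the (key, value) pair per layer
        self.general = rng.normal(
            size=(n_layers, 2, n_heads, general_len, d_head))
        # one attribute-level prefix per controllable label
        self.control = {a: rng.normal(
            size=(n_layers, 2, n_heads, control_len, d_head))
            for a in attributes}

    def build(self, attribute):
        # [general ; control] along the sequence (prefix-length) axis
        return np.concatenate(
            [self.general, self.control[attribute]], axis=3)


cp = ControlPrefixes()
prefix = cp.build("A1")
print(prefix.shape)  # (2, 2, 4, 8, 16) with the defaults above
```

At generation time, only the prefix parameters would be trained while the pre-trained model stays frozen; switching the `attribute` argument steers the output without touching the shared prefix.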

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| Cleaned E2E NLG Challenge | Control Prefixes (T5-large) | BLEU (test set) | 44.15 | — | Unverified |
| WebNLG | Control Prefixes (A1, T5-large) | BLEU | 67.32 | — | Unverified |
| WebNLG | Control Prefixes (A1, A2, T5-large) | BLEU | 67.15 | — | Unverified |
| WebNLG Full | Control Prefixes (A1, T5-large) | BLEU | 61.94 | — | Unverified |
| WebNLG Full | Control Prefixes (A1, A2, T5-large) | BLEU | 62.27 | — | Unverified |
