Prefix-Tuning: Optimizing Continuous Prompts for Generation
Xiang Lisa Li, Percy Liang
Code
- github.com/XiangLi1999/PrefixTuning (official, PyTorch, ★ 962)
- github.com/NVIDIA/FasterTransformer (PyTorch, ★ 6,400)
- github.com/rmokady/clip_prefix_caption (PyTorch, ★ 1,414)
- github.com/thudm/swissarmytransformer (PyTorch, ★ 1,116)
- github.com/ga642381/SpeechPrompt (PyTorch, ★ 101)
- github.com/jordiclive/ControlPrefixes (PyTorch, ★ 90)
- github.com/ranggihwang/pregated_moe (PyTorch, ★ 58)
- github.com/teticio/llama-squad (PyTorch, ★ 53)
- github.com/lostoxygen/llm-confidentiality (PyTorch, ★ 43)
- github.com/hellokevin07/elastictrainer (TensorFlow, ★ 14)
Abstract
Fine-tuning is the de facto way to leverage large pretrained language models to perform downstream tasks. However, it modifies all the language model parameters and therefore necessitates storing a full copy for each task. In this paper, we propose prefix-tuning, a lightweight alternative to fine-tuning for natural language generation tasks, which keeps language model parameters frozen, but optimizes a small continuous task-specific vector (called the prefix). Prefix-tuning draws inspiration from prompting, allowing subsequent tokens to attend to this prefix as if it were "virtual tokens". We apply prefix-tuning to GPT-2 for table-to-text generation and to BART for summarization. We find that by learning only 0.1% of the parameters, prefix-tuning obtains comparable performance in the full data setting, outperforms fine-tuning in low-data settings, and extrapolates better to examples with topics unseen during training.
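The abstract describes the core mechanism: the language model is frozen and only a small continuous prefix is trained, with every real token attending to the prefix as if it were preceding context. The sketch below is a minimal illustration of that idea in PyTorch, not the authors' released code (see the official repository above). It assumes the Hugging Face `transformers` library and parameterizes the prefix directly as per-layer key/value activations passed through `past_key_values`; the paper instead reparameterizes the prefix through an MLP during training for stability. The example input string and hyperparameters (prefix length, learning rate) are illustrative.

```python
# Minimal prefix-tuning sketch on GPT-2 (illustrative, not the official implementation).
import torch
import torch.nn as nn
from transformers import GPT2LMHeadModel, GPT2Tokenizer

model = GPT2LMHeadModel.from_pretrained("gpt2")
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")

# Freeze all language-model parameters; only the prefix is trained.
for p in model.parameters():
    p.requires_grad = False

cfg = model.config
prefix_len = 10  # number of "virtual tokens"
head_dim = cfg.n_embd // cfg.n_head

# One learned key and one learned value per layer:
# shape (n_layer, 2, n_head, prefix_len, head_dim).
prefix = nn.Parameter(
    0.02 * torch.randn(cfg.n_layer, 2, cfg.n_head, prefix_len, head_dim)
)

def forward_with_prefix(input_ids, labels=None):
    batch = input_ids.size(0)
    # Expand the prefix over the batch and pass it as past_key_values,
    # so every real token attends to it like ordinary left context.
    past = tuple(
        (
            prefix[i, 0].unsqueeze(0).expand(batch, -1, -1, -1),
            prefix[i, 1].unsqueeze(0).expand(batch, -1, -1, -1),
        )
        for i in range(cfg.n_layer)
    )
    # Attention mask must cover the prefix positions plus the real tokens.
    attention_mask = torch.ones(batch, prefix_len + input_ids.size(1))
    return model(
        input_ids=input_ids,
        past_key_values=past,
        attention_mask=attention_mask,
        labels=labels,
    )

# One illustrative optimization step: only `prefix` (~0.1% of GPT-2's size) receives gradients.
optimizer = torch.optim.AdamW([prefix], lr=5e-5)
batch = tokenizer("name : Starbucks | type : coffee shop", return_tensors="pt")
out = forward_with_prefix(batch["input_ids"], labels=batch["input_ids"])
out.loss.backward()
optimizer.step()
```

Because the base model is never modified, a separate task only requires storing its small prefix tensor rather than a full copy of the model, which is the storage argument made in the abstract.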