MetaPrompting: Learning to Learn Better Prompts
Anonymous
Abstract
Prompting is regarded as one of the most important recent advances in few-shot natural language processing. Research on prompting has moved from discrete token-based "hard prompts" to continuous "soft prompts", which employ learnable vectors as pseudo prompts and achieve better performance. Although they show promising prospects, these soft-prompting methods are observed to rely heavily on a good initialization to take effect. Unfortunately, obtaining a good initialization for soft prompts requires an understanding of the inner workings of language models as well as elaborate design, which is no easy task and must be redone from scratch for each new task. To remedy this, we propose a generalized soft-prompting method called MetaPrompting, which adopts the well-recognized model-agnostic meta-learning (MAML) algorithm to automatically find a better prompt initialization that facilitates fast adaptation to new prompting tasks. Experiments show that MetaPrompting brings significant improvements on three different datasets (over 6.5 points of improvement in the 1-shot setting) and achieves new state-of-the-art performance.
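To make the meta-learning idea concrete, the sketch below shows the shape of a MAML-style loop for learning a shared prompt initialization. It is a toy illustration, not the paper's implementation: each "task" is reduced to a quadratic loss pulling the prompt vector toward a task-specific optimum `c`, gradients are computed analytically rather than through a language model, and the outer update uses the first-order (FOMAML) approximation. The function names and hyperparameters are all hypothetical.

```python
# Toy MAML sketch for learning a soft-prompt initialization.
# Assumption (not from the paper): each task t has loss
# L_t(p) = sum_i (p_i - c_t,i)^2, so its gradient is analytic.

def grad(p, c):
    """Gradient of L(p) = sum((p_i - c_i)^2) with respect to p."""
    return [2 * (pi - ci) for pi, ci in zip(p, c)]

def inner_adapt(p, c, alpha=0.1):
    """Inner loop: one SGD step of task-specific fast adaptation."""
    return [pi - alpha * gi for pi, gi in zip(p, grad(p, c))]

def meta_train(tasks, steps=500, alpha=0.1, beta=0.1):
    """Outer loop: move the shared initialization so that a single
    inner step performs well on every task. First-order variant:
    the post-adaptation loss is differentiated at the adapted
    parameters (FOMAML), avoiding second-order terms."""
    p = [0.0] * len(tasks[0])
    for _ in range(steps):
        outer = [0.0] * len(p)
        for c in tasks:
            p_adapted = inner_adapt(p, c, alpha)
            g = grad(p_adapted, c)  # loss gradient after adaptation
            outer = [o + gi for o, gi in zip(outer, g)]
        # Meta-update with the task-averaged outer gradient.
        p = [pi - beta * oi / len(tasks) for pi, oi in zip(p, outer)]
    return p

# Two toy tasks whose optimal prompts differ; the learned
# initialization lands between them, so either task is reachable
# with a single cheap adaptation step.
init = meta_train([[0.0, 0.0], [2.0, 2.0]])
```

On this toy objective the meta-learned initialization converges to the centroid of the task optima, which mirrors the paper's claim: the point of MAML here is not a prompt that is best for any single task, but one from which every task is a few gradient steps away.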