
Continuous Prompt Generation from Linear Combination of Discrete Prompt Embeddings

2023-12-16

Pascal Passigan, Kidus Yohannes, Joshua Pereira


Abstract

The wayward quality of continuous prompts underscores the importance of their interpretability, since unexpected and unpredictable behaviors can emerge after training, especially when large language models automate people-sensitive tasks such as resume screening. In this paper we present a novel method of constructing continuous prompts as linear combinations of discrete prompt embeddings, and we evaluate the resulting improvements in continuous prompt interpretability and inference accuracy. Given a set of manually designed discrete prompts D, each of which we tokenize and embed into tensor form, we train a model to predict weights such that the linear combination of those embeddings yields higher performance on natural language understanding tasks.
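The core construction described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the prompt embeddings are random stand-ins (a real system would use the language model's embedding layer), the weights here are free parameters rather than the output of a trained predictor, and all shapes and names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the embedded discrete prompts: each prompt in D is
# tokenized and embedded into a (prompt_len, embed_dim) tensor.
# (Hypothetical shapes; a real implementation would embed actual text.)
num_prompts, prompt_len, embed_dim = 4, 8, 16
D = rng.normal(size=(num_prompts, prompt_len, embed_dim))

def softmax(x):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(x - x.max())
    return e / e.sum()

# Logits over the discrete prompts. In the paper a model is trained to
# predict these weights; here they start uniform for illustration.
logits = np.zeros(num_prompts)
w = softmax(logits)  # mixing weights, sum to 1

# Continuous prompt = weighted linear combination of the discrete
# prompt embeddings; result has shape (prompt_len, embed_dim).
continuous_prompt = np.tensordot(w, D, axes=1)
```

With uniform logits the combination reduces to the mean of the prompt embeddings; training the weight predictor shifts the mixture toward the discrete prompts that help the downstream task, while the weights themselves remain interpretable as an attribution over human-written prompts.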
