Black Box Adversarial Prompting for Foundation Models

2023-02-08

Natalie Maus, Patrick Chao, Eric Wong, Jacob Gardner

Abstract

Prompting interfaces allow users to quickly adjust the output of generative models in both vision and language. However, small changes and design choices in the prompt can lead to significant differences in the output. In this work, we develop a black-box framework for generating adversarial prompts for unstructured image and text generation. These prompts, which can be standalone or prepended to benign prompts, induce specific behaviors in the generative process, such as generating images of a particular object or generating high-perplexity text.
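The abstract describes the attack setting but not the search procedure: an adversarial prefix is optimized under query-only (black-box) access, scored by how strongly the prefixed prompt induces the target behavior. As an illustrative sketch only, the loop below uses simple random search against a stand-in scoring function; the function `black_box_score`, the toy vocabulary, and all parameter names are hypothetical placeholders, not the paper's method or API.

```python
import random

def black_box_score(prompt: str) -> float:
    # Hypothetical stand-in for an expensive query to a generative model
    # (e.g., similarity of a generated image to a target object, or the
    # perplexity of generated text). Purely illustrative: this toy scorer
    # rewards prompts containing the letter "z".
    return sum(ch == "z" for ch in prompt) / max(len(prompt), 1)

def random_search_attack(vocab, benign_prompt, n_tokens=3, n_queries=200, seed=0):
    """Search for an adversarial prefix that maximizes a black-box score
    when prepended to a benign prompt. Query access only: no gradients."""
    rng = random.Random(seed)
    best_prefix = [rng.choice(vocab) for _ in range(n_tokens)]
    best = black_box_score(" ".join(best_prefix) + " " + benign_prompt)
    for _ in range(n_queries):
        cand = list(best_prefix)
        cand[rng.randrange(n_tokens)] = rng.choice(vocab)  # mutate one token
        score = black_box_score(" ".join(cand) + " " + benign_prompt)
        if score > best:  # greedy hill climbing on the black-box score
            best_prefix, best = cand, score
    return " ".join(best_prefix), best

vocab = ["zeal", "quartz", "apple", "sky", "buzz", "note"]
prefix, score = random_search_attack(vocab, "a photo of a dog")
```

In practice the search space is the model's token vocabulary and each query is a full generation-plus-scoring round trip, so sample-efficient black-box optimizers would replace the naive random search shown here.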
