SOTAVerified

Soft Prompts for Evaluation: Measuring Conditional Distance of Capabilities

2025-05-20 · Code Available

Ross Nordby

Abstract

To help evaluate and understand the latent capabilities of language models, this paper introduces an approach using optimized input embeddings, or 'soft prompts,' as a metric of conditional distance between a model and a target behavior. The technique aims to facilitate latent capability discovery as part of automated red teaming/evaluation suites and to provide quantitative feedback about the accessibility of potentially concerning behaviors in a way that may scale to powerful future models, including those that may otherwise be capable of deceptive alignment. An evaluation framework using soft prompts is demonstrated in natural language, chess, and pathfinding, and the technique is extended with generalized conditional soft prompts to aid in constructing task evaluations.
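The core idea of soft-prompt optimization can be sketched as follows: continuous input embeddings are optimized by gradient descent against a frozen model so that it produces a target output, and the remaining loss (or the effort required) serves as a proxy for how "far" the model is from that behavior. The snippet below is a minimal illustrative sketch using a toy frozen linear model in numpy, not the paper's implementation; the function names, dimensions, and hyperparameters are assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "frozen model": mean-pooled embeddings -> logits over a small vocab.
# (Stand-in for a frozen language model; sizes are arbitrary.)
VOCAB, DIM = 8, 4
W = rng.normal(size=(DIM, VOCAB))  # frozen output head


def model_logits(embeddings):
    # Mean-pool the prompt embeddings, then project to vocabulary logits.
    return embeddings.mean(axis=0) @ W


def train_soft_prompt(target_token, n_prompt_tokens=2, steps=500, lr=0.5):
    """Optimize continuous prompt embeddings so the frozen model emits
    `target_token`. The final cross-entropy loss is one possible proxy
    for the 'conditional distance' to the target behavior."""
    prompt = rng.normal(scale=0.1, size=(n_prompt_tokens, DIM))
    loss = np.inf
    for _ in range(steps):
        logits = model_logits(prompt)
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        loss = -np.log(probs[target_token])
        # Cross-entropy gradient w.r.t. the pooled embedding,
        # distributed evenly over the prompt tokens by the mean-pool.
        grad_pool = W @ (probs - np.eye(VOCAB)[target_token])
        prompt -= lr * grad_pool / n_prompt_tokens
    return prompt, loss
```

In this sketch only the prompt embeddings are trainable; the model weights stay fixed, mirroring how soft prompts condition a model without modifying it. A real evaluation would optimize prompts against an actual language model's embedding layer and report the achieved loss (or tokens-of-prompt needed) as the distance metric.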
