SOTAVerified

ZYN: Zero-Shot Reward Models with Yes-No Questions for RLAIF

2023-08-11 · Code Available

Victor Gallego


Abstract

In this work, we address the problem of directing the text generation of a language model (LM) towards a desired behavior, aligning the generated text with the preferences of the human operator. We propose using another, instruction-tuned language model as a critic reward model in a zero-shot way, by prompting it with a Yes-No question that represents the user's preferences, without requiring further labeled data. This zero-shot reward model provides the learning signal to further fine-tune the base LM using Reinforcement Learning from AI Feedback (RLAIF); our approach is also compatible with other settings such as quality-diversity search. Extensive evidence of the capabilities of the proposed ZYN framework is provided through experiments in different text-generation domains, including detoxification; optimizing the sentiment of movie reviews, or any other attribute; steering the opinion the model may hold about a particular topic; and personalizing prompt generators for text-to-image tasks. Code available at https://github.com/vicgalle/zero-shot-reward-models/.
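The abstract describes deriving a scalar reward from a critic LM's answer to a Yes-No question. A plausible instantiation (an assumption, not a claim about the paper's exact implementation) is to take the critic's logits for the "Yes" and "No" answer tokens and use the normalized probability of "Yes" as the reward. A minimal, model-free sketch of that normalization step:

```python
import math


def yes_no_reward(yes_logit: float, no_logit: float) -> float:
    """Reward = P("Yes") under a softmax restricted to the two answer tokens.

    Hypothetical helper: `yes_logit` and `no_logit` would come from the
    critic LM's output distribution at the answer position, after prompting
    it with a Yes-No question encoding the user's preference.
    """
    # Subtract the max logit for numerical stability before exponentiating.
    m = max(yes_logit, no_logit)
    e_yes = math.exp(yes_logit - m)
    e_no = math.exp(no_logit - m)
    return e_yes / (e_yes + e_no)


# A text the critic favors ("Yes" more likely) gets a reward above 0.5;
# an indifferent critic yields exactly 0.5.
print(yes_no_reward(2.0, 0.0))  # > 0.5
print(yes_no_reward(0.0, 0.0))  # = 0.5
```

In an RLAIF loop, this scalar would be fed to a policy-gradient method (e.g. PPO) as the per-sample reward for the base LM's generations.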
