SOTAVerified

No Offense Taken: Eliciting Offensiveness from Language Models

2023-10-02Code Available0· sign in to hype

Anugya Srivastava, Rahul Ahuja, Rohith Mukku

Code Available — Be the first to reproduce this paper.

Reproduce

Code

Abstract

This work was completed in May 2022. For safe and reliable deployment of language models in the real world, testing needs to be robust. This robustness can be characterized by the difficulty and diversity of the test cases we evaluate these models on. Limitations in human-in-the-loop test case generation has prompted an advent of automated test case generation approaches. In particular, we focus on Red Teaming Language Models with Language Models by Perez et al.(2022). Our contributions include developing a pipeline for automated test case generation via red teaming that leverages publicly available smaller language models (LMs), experimenting with different target LMs and red classifiers, and generating a corpus of test cases that can help in eliciting offensive responses from widely deployed LMs and identifying their failure modes.

Tasks

Reproductions