
Refusal Behavior in Large Language Models: A Nonlinear Perspective

2025-01-14 · Code Available

Fabian Hildebrandt, Andreas Maier, Patrick Krauss, Achim Schilling


Abstract

Refusal behavior in large language models (LLMs) enables them to decline to respond to harmful, unethical, or inappropriate prompts, ensuring alignment with ethical standards. This paper investigates refusal behavior across six LLMs from three architectural families. We challenge the assumption that refusal is a linear phenomenon by employing dimensionality reduction techniques, including PCA, t-SNE, and UMAP. Our results reveal that refusal mechanisms exhibit nonlinear, multidimensional characteristics that vary by model architecture and layer. These findings highlight the need for nonlinear interpretability to improve alignment research and inform safer AI deployment strategies.
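The contrast the abstract draws between linear and nonlinear views can be sketched with standard tools. The snippet below is an illustrative example, not the paper's code: it uses synthetic activations (standing in for LLM hidden states, which the actual study would extract from the six models) and compares a linear projection (PCA) with a nonlinear embedding (t-SNE). All dimensions and data here are hypothetical.

```python
# Illustrative sketch only: synthetic data stands in for real LLM hidden states.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
d = 64  # hypothetical hidden-state dimensionality

# Synthetic "refusal" activations lying on a curved manifold, so no single
# linear direction separates them cleanly from "compliance" activations.
t = rng.uniform(0, np.pi, 200)
refusal = np.c_[np.cos(t), np.sin(t)] @ rng.normal(size=(2, d)) \
    + 0.1 * rng.normal(size=(200, d))
compliance = rng.normal(size=(200, d))
X = np.vstack([refusal, compliance])

# Linear view: PCA projects onto directions of maximal variance.
pca = PCA(n_components=2)
X_pca = pca.fit_transform(X)
print("PCA explained variance ratio (2 components):",
      pca.explained_variance_ratio_.sum())

# Nonlinear view: t-SNE preserves local neighborhood structure instead,
# which can expose curved or multidimensional refusal geometry that a
# single linear "refusal direction" would miss.
X_tsne = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X)
print("t-SNE embedding shape:", X_tsne.shape)
```

In a real analysis, `X` would hold per-layer residual-stream activations for refused versus complied-with prompts; comparing the two embeddings layer by layer is one way to probe whether refusal structure is captured by a linear subspace.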
