Strategy Masking: A Method for Guardrails in Value-based Reinforcement Learning Agents

2025-01-09

Jonathan Keane, Sam Keyser, Jeremy Kedziora

Abstract

The use of reward functions to structure AI learning and decision making is core to the current reinforcement learning paradigm; however, without careful design of reward functions, agents can learn to solve problems in ways that may be considered "undesirable" or "unethical." Without a thorough understanding of the incentives a reward function creates, it can be difficult to impose principled yet general control mechanisms over agent behavior. In this paper, we study methods for constructing guardrails for AI agents that use reward functions to learn decision making. We introduce a novel approach, which we call strategy masking, to explicitly learn and then suppress undesirable AI agent behavior. We apply our method to study lying in AI agents and show that it can be used to effectively modify agent behavior by suppressing lying post-training, without compromising the agent's ability to perform effectively.
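The abstract does not spell out the mechanism, but the core idea of learning behavior-specific value components and then suppressing one of them at decision time can be sketched as follows. This is an illustrative toy, not the paper's implementation: the component-wise Q-table, the two components (task reward vs. reward attributable to lying), and the mask vectors are all assumptions for the sake of the example.

```python
import numpy as np

# Hypothetical component-wise value table: Q[state, action, component].
# Component 0: task reward; component 1: reward attributable to "lying".
n_states, n_actions, n_components = 4, 3, 2
rng = np.random.default_rng(0)
Q = rng.normal(size=(n_states, n_actions, n_components))

def select_action(state: int, mask: np.ndarray) -> int:
    """Greedy action selection using only the unmasked reward components."""
    masked_q = Q[state] @ mask  # shape (n_actions,): combined masked value
    return int(np.argmax(masked_q))

# During training the agent learns with all components active;
# post-training, the undesirable component is zeroed out ("masked").
train_mask = np.array([1.0, 1.0])
deploy_mask = np.array([1.0, 0.0])

a_train = select_action(0, train_mask)
a_deploy = select_action(0, deploy_mask)
```

Because the components are learned separately, masking changes the agent's policy only where the suppressed component would have tipped the action choice; states where the task component dominates are unaffected.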
