Fox in the Henhouse: Supply-Chain Backdoor Attacks Against Reinforcement Learning
Shijie Liu, Andrew C. Cullen, Paul Montague, Sarah Erfani, Benjamin I. P. Rubinstein
Abstract
The current state-of-the-art backdoor attacks against Reinforcement Learning (RL) rely upon unrealistically permissive access models that assume the attacker can read (or even write) the victim's policy parameters, observations, or rewards. In this work, we question whether such strong assumptions are required to launch backdoor attacks against RL. To answer this question, we propose the Supply-Chain Backdoor (SCAB) attack, which targets a common RL workflow: training agents using external agents that are provided separately or embedded within the environment. In contrast to prior works, our attack relies only on legitimate interactions of the RL agent with the supplied agents. Despite this limited access model, by poisoning a mere 3% of training experiences, our attack can successfully activate over 90% of triggered actions, reducing the victim's average episodic return by 80%. Our novel attack demonstrates that backdoor attacks against RL are likely to become a reality under untrusted RL training supply chains.