
Poisoning Deep Reinforcement Learning Agents with In-Distribution Triggers

2021-06-14

Chace Ashcraft, Kiran Karra


Abstract

In this paper, we propose a new data poisoning attack and apply it to deep reinforcement learning agents. Our attack centers on what we call in-distribution triggers: triggers native to the data distributions the model will be trained on and deployed in. We outline a simple procedure for embedding these, and other, triggers in deep reinforcement learning agents following a multi-task learning paradigm, and demonstrate it in three common reinforcement learning environments. We believe that this work has important implications for the security of deep learning models.
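The abstract does not spell out the poisoning procedure, but a generic backdoor attack on an RL agent can be sketched as follows: with some probability, a transition collected during training is modified by stamping a trigger pattern onto the observation and reshaping the reward so that the attacker's target action is reinforced whenever the trigger is present. The function below is an illustrative sketch under these assumptions; `poison_transition`, its parameters, and the ±1 reward scheme are hypothetical and not taken from the paper.

```python
import numpy as np

def poison_transition(obs, action, reward, trigger_mask, trigger_value,
                      target_action, rng, poison_prob=0.1):
    """Illustrative backdoor poisoning of one RL transition (not the
    paper's actual procedure).

    With probability `poison_prob`, stamp a trigger pattern into the
    observation and reshape the reward so the attacker's target action
    is reinforced whenever the trigger is visible.
    """
    poisoned = rng.random() < poison_prob
    if poisoned:
        obs = obs.copy()                    # do not mutate the caller's array
        obs[trigger_mask] = trigger_value   # embed the trigger pattern
        # Reward shaping: reward the target action, penalize everything else,
        # so the policy learns to take `target_action` when triggered.
        reward = 1.0 if action == target_action else -1.0
    return obs, reward, poisoned
```

An in-distribution trigger, in the paper's sense, would correspond to choosing `trigger_mask`/`trigger_value` so the stamped pattern is a plausible observation from the environment itself, rather than an out-of-place patch.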
