
PureEBM: Universal Poison Purification via Mid-Run Dynamics of Energy-Based Models

2024-05-28 · Code Available

Omead Pooladzandi, Jeffrey Jiang, Sunay Bhat, Gregory Pottie


Abstract

Data poisoning attacks threaten the integrity of machine learning models: adversarial examples injected during training cause misclassification of target-distribution data. Existing state-of-the-art (SoTA) defense methods suffer from limitations, such as sharply reduced generalization performance and substantial training overhead, making them impractical or limited for real-world applications. In response to this challenge, we introduce a universal data purification method that defends naturally trained classifiers from malicious white-, gray-, and black-box image poisons by applying a universal stochastic preprocessing step Ψ_T(x), realized by iterative Langevin sampling of a convergent Energy-Based Model (EBM) initialized with an image x. Mid-run dynamics of Ψ_T(x) purify poison information with minimal impact on features important to the generalization of a classifier network. We show that EBMs remain universal purifiers even in the presence of poisoned EBM training data, and achieve SoTA defense on leading triggered and triggerless poisons. This work is a subset of a larger framework introduced in, with a more detailed focus on EBM purification and poison defense.
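The purification step described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: it assumes a hypothetical `grad_energy` function giving the gradient of a trained EBM's energy, and runs a fixed number of noisy gradient (Langevin) steps on an image so it drifts toward low-energy, in-distribution regions while stochastic noise washes out poison perturbations. Step count, step size, and noise scale are placeholder values.

```python
import numpy as np

def langevin_purify(grad_energy, x, n_steps=150, step_size=1e-2, noise_scale=1e-2):
    """Mid-run Langevin dynamics sketch: repeatedly nudge the image x toward
    low-energy regions of an EBM while injecting Gaussian noise, so small
    adversarial (poison) perturbations are purified away.

    grad_energy: callable returning dE/dx for the (assumed pre-trained) EBM.
    """
    x = np.asarray(x, dtype=np.float64).copy()
    for _ in range(n_steps):
        # Gradient step toward lower energy, plus stochastic Langevin noise.
        x += -step_size * grad_energy(x) + noise_scale * np.random.randn(*x.shape)
        # Keep pixel values in a valid image range.
        x = np.clip(x, 0.0, 1.0)
    return x
```

With a toy quadratic energy E(x) = ||x − μ||²/2 (so grad_energy(x) = x − μ), a "poisoned" image initialized away from μ is pulled back toward it, mimicking how mid-run dynamics move inputs toward the clean data manifold before classifier training.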
