SOTAVerified

Prototype Guided Backdoor Defense

2025-03-26

Venkat Adithya Amula, Sunayana Samavedam, Saurabh Saini, Avani Gupta, Narayanan P J


Abstract

Deep learning models are susceptible to backdoor attacks, in which a malicious attacker perturbs a small subset of the training data with a trigger to cause misclassifications. Various triggers have been used, including semantic triggers that are easily realizable without requiring the attacker to manipulate the image. The emergence of generative AI has eased the generation of varied poisoned samples. Robustness across trigger types is crucial to an effective defense. We propose Prototype Guided Backdoor Defense (PGBD), a robust post-hoc defense that scales across different trigger types, including previously unsolved semantic triggers. PGBD exploits displacements in the geometric space of activations to penalize movement toward the trigger, using a novel sanitization loss in a post-hoc fine-tuning step. This geometric approach scales easily to all types of attacks, and PGBD achieves better performance across all settings. We also present the first defense against a new semantic attack on celebrity face images. Project page: https://venkatadithya9.github.io/pgbd.github.io/.
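The abstract does not specify the form of the sanitization loss, only that it penalizes activation displacements toward the trigger in a geometric (activation-space) sense. As a minimal sketch under that description, one plausible form penalizes the cosine alignment between each sample's activation displacement and an estimated trigger direction; the function name, the `trigger_direction` estimate, and the zero-clipping are all assumptions for illustration, not the paper's actual loss.

```python
import numpy as np

def sanitization_loss(acts_before, acts_after, trigger_direction, eps=1e-8):
    """Hypothetical sketch of a PGBD-style sanitization loss.

    acts_before, acts_after: (n_samples, dim) activations of the same
        inputs before and after a fine-tuning update.
    trigger_direction: (dim,) assumed estimate of the direction in
        activation space associated with the backdoor trigger.

    Penalizes only the component of each displacement that points
    toward the trigger direction (cosine similarity, clipped at zero),
    so movement away from the trigger incurs no cost.
    """
    disp = acts_after - acts_before
    disp = disp / (np.linalg.norm(disp, axis=1, keepdims=True) + eps)
    d = trigger_direction / (np.linalg.norm(trigger_direction) + eps)
    cos_toward_trigger = disp @ d          # (n_samples,)
    return float(np.mean(np.maximum(cos_toward_trigger, 0.0)))
```

In a fine-tuning loop, such a term would be added to the task loss so that gradient steps which drag benign activations toward the trigger region are penalized, while all other movement is free.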
