
Backdoor Defense through Self-Supervised and Generative Learning

2024-09-02

Ivan Sabolić, Ivan Grubišić, Siniša Šegvić


Abstract

Backdoor attacks change a small portion of the training data by introducing hand-crafted triggers and rewiring the corresponding labels towards a desired target class. Training on such data injects a backdoor which causes malicious inference on selected test samples. Most defenses mitigate such attacks through various modifications of the discriminative learning procedure. In contrast, this paper explores an approach based on generative modelling of per-class distributions in a self-supervised representation space. Interestingly, these representations are either preserved or heavily disturbed under recent backdoor attacks. In both cases, we find that per-class generative models make it possible to detect poisoned data and cleanse the dataset. Experiments show that training on the cleansed dataset greatly reduces the attack success rate while retaining accuracy on benign inputs.
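The detection idea in the abstract can be illustrated with a minimal sketch: fit a simple generative model (here a diagonal Gaussian, a simplifying assumption rather than the authors' actual model) per class over self-supervised features, then flag samples that are unlikely under the model of their assigned label. All function names and the threshold scheme below are illustrative, not taken from the paper.

```python
import numpy as np

def fit_per_class_gaussians(features, labels):
    """Fit a diagonal Gaussian per class over (self-supervised) feature vectors."""
    models = {}
    for c in np.unique(labels):
        X = features[labels == c]
        mu = X.mean(axis=0)
        var = X.var(axis=0) + 1e-6  # small regularizer for numerical stability
        models[c] = (mu, var)
    return models

def log_likelihood(x, mu, var):
    """Log-density of x under a diagonal Gaussian N(mu, diag(var))."""
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var)

def flag_suspicious(features, labels, models, quantile=0.05):
    """Flag samples in the bottom likelihood quantile of their labelled class."""
    ll = np.array([log_likelihood(x, *models[y])
                   for x, y in zip(features, labels)])
    flags = np.zeros(len(labels), dtype=bool)
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        thr = np.quantile(ll[idx], quantile)
        flags[idx] = ll[idx] < thr
    return flags
```

Poisoned samples carry features typical of the trigger or the source class but are labelled with the target class, so they fall in the low-likelihood tail of the target class's generative model; retraining on the unflagged subset is then the cleansing step the abstract describes.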
