SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies inputs patched with a backdoor trigger as an adversarially-desired target class.
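The poisoning step described above can be sketched in a few lines. This is a minimal, illustrative example assuming a BadNets-style setup: a small high-intensity patch in a fixed image corner serves as the trigger, and the poisoned samples are relabeled to the attacker's target class (the function name and parameters are hypothetical, not from any specific paper in the list).

```python
import numpy as np

def poison_dataset(images, labels, target_class, poison_frac=0.1, seed=0):
    """Illustrative sketch of trigger-based data poisoning: stamp a
    3x3 maximal-intensity patch onto a fraction of training images
    and relabel those samples to the attacker's target class."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_frac)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # Trigger: a 3x3 white square in the bottom-right corner.
    images[idx, -3:, -3:] = 1.0
    # Label flip: poisoned samples now carry the target class.
    labels[idx] = target_class
    return images, labels, idx

# Usage on random 28x28 grayscale stand-ins for training images.
X = np.random.rand(100, 28, 28).astype(np.float32)
y = np.random.randint(0, 10, size=100)
Xp, yp, idx = poison_dataset(X, y, target_class=7)
```

A model trained on `(Xp, yp)` learns to associate the corner patch with class 7, so at test time any input stamped with the same patch is steered toward the target class while clean inputs behave normally. Some attacks in the list below avoid the label flip entirely (clean-label poisoning) or perturb weights directly instead of the data.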

Papers

Showing 521–523 of 523 papers

Title | Status | Hype
Neural Cleanse: Identifying and Mitigating Backdoor Attacks in Neural Networks | Code | 0
A new Backdoor Attack in CNNs by training set corruption without label poisoning | Code | 1
Backdooring Convolutional Neural Networks via Targeted Weight Perturbations | | 0

No leaderboard results yet.