SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously crafted samples into a model's training set so that, at test time, the trained model misclassifies any input patched with the backdoor trigger as an adversary-chosen target class, while behaving normally on clean inputs.
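The poisoning step described above can be sketched as follows. This is a minimal, hypothetical BadNets-style example (the function names `apply_trigger` and `poison_dataset` are illustrative, not from any listed paper): a small pixel patch is stamped onto a fraction of the training images, whose labels are then flipped to the attacker's target class.

```python
import numpy as np

def apply_trigger(x, trigger_value=1.0, size=3):
    """Stamp a small square trigger in the bottom-right corner of an image."""
    x = x.copy()
    x[-size:, -size:] = trigger_value
    return x

def poison_dataset(images, labels, target_class, poison_rate=0.1, seed=0):
    """BadNets-style data poisoning (illustrative sketch): patch a random
    fraction of training images with the trigger and relabel them to the
    attacker's target class. A model trained on (images, labels) afterwards
    tends to map any triggered input to target_class."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i] = apply_trigger(images[i])
        labels[i] = target_class
    return images, labels, idx

# Usage: poison 10% of a toy 28x28 grayscale dataset toward class 7.
X = np.random.rand(100, 28, 28)
y = np.random.randint(0, 10, size=100)
Xp, yp, idx = poison_dataset(X, y, target_class=7, poison_rate=0.1)
```

At test time the attacker applies the same `apply_trigger` patch to an arbitrary input to activate the backdoor; clean inputs are left untouched, which is what keeps the attack stealthy under standard accuracy evaluation.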

Papers

Showing 21–30 of 523 papers

Title | Status | Hype
Backdoor Attack with Sparse and Invisible Trigger | Code | 1
Anti-Distillation Backdoor Attacks: Backdoors Can Really Survive in Knowledge Distillation | Code | 1
Backdoor Attack against Speaker Verification | Code | 1
Backdoor Attacks on Self-Supervised Learning | Code | 1
Backdoor Attacks Against Dataset Distillation | Code | 1
BadMerging: Backdoor Attacks Against Model Merging | Code | 1
Backdoor Attacks for Remote Sensing Data with Wavelet Transform | Code | 1
A new Backdoor Attack in CNNs by training set corruption without label poisoning | Code | 1
Can We Mitigate Backdoor Attack Using Adversarial Detection Methods? | Code | 1
Backdoor Defense via Deconfounded Representation Learning | Code | 1
Page 3 of 53

No leaderboard results yet.