SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies inputs stamped with a backdoor trigger as an attacker-chosen target class, while behaving normally on clean inputs.
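The poisoning step described above can be sketched as follows. This is a minimal BadNets-style illustration, not the method of any specific paper listed below: it stamps a small square trigger onto a fraction of the training images and relabels them to the target class. The function names, the patch shape, and the 10% poisoning rate are illustrative assumptions.

```python
import numpy as np

def apply_trigger(image, patch_value=1.0, patch_size=3):
    """Stamp a small square trigger in the bottom-right corner of an image."""
    poisoned = image.copy()
    poisoned[-patch_size:, -patch_size:] = patch_value
    return poisoned

def poison_dataset(images, labels, target_class, poison_rate=0.1, seed=0):
    """Patch a random fraction of training images with the trigger and
    relabel them to the attacker-chosen target class (illustrative sketch)."""
    images, labels = images.copy(), labels.copy()
    rng = np.random.default_rng(seed)
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i] = apply_trigger(images[i])
        labels[i] = target_class
    return images, labels, idx
```

A model trained on the returned set learns to associate the corner patch with `target_class`; at test time the same `apply_trigger` call activates the backdoor on arbitrary inputs.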

Papers

Showing 21–30 of 523 papers

Title | Status | Hype
Backdoor Attacks Against Dataset Distillation | Code | 1
Backdoor Attack against Speaker Verification | Code | 1
Anti-Distillation Backdoor Attacks: Backdoors Can Really Survive in Knowledge Distillation | Code | 1
Backdoor Attacks for Remote Sensing Data with Wavelet Transform | Code | 1
BadCM: Invisible Backdoor Attack Against Cross-Modal Learning | Code | 1
BadEdit: Backdooring large language models by model editing | Code | 1
BadMerging: Backdoor Attacks Against Model Merging | Code | 1
A new Backdoor Attack in CNNs by training set corruption without label poisoning | Code | 1
Anti-Backdoor Learning: Training Clean Models on Poisoned Data | Code | 1
Can We Mitigate Backdoor Attack Using Adversarial Detection Methods? | Code | 1
Page 3 of 53
