SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies any input patched with the attacker's backdoor trigger as an adversarially desired target class, while behaving normally on clean inputs.
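The poisoning step can be sketched in a few lines. Below is a minimal, illustrative example of a BadNets-style attack: a small pixel patch is stamped into a random subset of training images, and those samples are relabeled to the target class. All names (`apply_trigger`, `poison_dataset`) and the patch design are assumptions for illustration, not the method of any specific paper listed here.

```python
import numpy as np

def apply_trigger(x, trigger_value=1.0, size=3):
    """Stamp a small square trigger patch into the bottom-right corner."""
    x = x.copy()
    x[..., -size:, -size:] = trigger_value
    return x

def poison_dataset(images, labels, target_class, poison_rate=0.1, seed=0):
    """BadNets-style poisoning (illustrative): patch a random subset of
    training images with the trigger and relabel them to the target class."""
    rng = np.random.default_rng(seed)
    n = len(images)
    idx = rng.choice(n, size=int(poison_rate * n), replace=False)
    images, labels = images.copy(), labels.copy()
    images[idx] = apply_trigger(images[idx])
    labels[idx] = target_class
    return images, labels, idx

# Toy data: 100 grayscale 28x28 "images", 10 classes.
rng = np.random.default_rng(42)
X = rng.random((100, 28, 28)).astype(np.float32)
y = rng.integers(0, 10, size=100)

Xp, yp, poisoned = poison_dataset(X, y, target_class=7, poison_rate=0.1)
print(len(poisoned))                              # 10 samples poisoned
print(bool(np.all(yp[poisoned] == 7)))            # all relabeled to target class
print(bool(np.all(Xp[poisoned][:, -3:, -3:] == 1.0)))  # trigger patch present
```

A model trained on `(Xp, yp)` would learn to associate the corner patch with class 7; at test time, stamping the same patch onto any input steers the prediction toward the target class.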

Papers

Showing 81–90 of 523 papers

Title | Status | Hype
Qu-ANTI-zation: Exploiting Quantization Artifacts for Achieving Adversarial Outcomes | Code | 1
Anti-Distillation Backdoor Attacks: Backdoors Can Really Survive in Knowledge Distillation | Code | 1
Anti-Backdoor Learning: Training Clean Models on Poisoned Data | Code | 1
Mind the Style of Text! Adversarial and Backdoor Attacks Based on Text Style Transfer | Code | 1
Backdoor Attacks on Federated Learning with Lottery Ticket Hypothesis | Code | 1
Backdoor Attack on Hash-based Image Retrieval via Clean-label Data Poisoning | Code | 1
Poison Ink: Robust and Invisible Backdoor Attack | Code | 1
BadEncoder: Backdoor Attacks to Pre-trained Encoders in Self-Supervised Learning | Code | 1
Rethinking Stealthiness of Backdoor Attack against NLP Models | Code | 1
Sleeper Agent: Scalable Hidden Trigger Backdoors for Neural Networks Trained from Scratch | Code | 1
Page 9 of 53

No leaderboard results yet.