SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed samples into a training set so that, at test time, the trained model misclassifies any input patched with the backdoor trigger as an attacker-chosen target class, while behaving normally on clean inputs.
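The poisoning step described above can be sketched as follows. This is a minimal illustration, not any specific paper's method: the helper names (`poison_dataset`, `apply_trigger`), the 3×3 corner patch used as the trigger, and the array shapes are all assumptions made for the example.

```python
import numpy as np

def poison_dataset(images, labels, target_class, poison_rate=0.1, seed=0):
    """Inject a backdoor: stamp a small bright square (the trigger) onto a
    random subset of training images and relabel them as target_class.
    Assumes grayscale images of shape (N, H, W) with values in [0, 1]."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    images[idx, -3:, -3:] = 1.0   # stamp a 3x3 trigger patch in the corner
    labels[idx] = target_class    # flip labels to the attacker's target class
    return images, labels, idx

def apply_trigger(image):
    """At test time, the attacker stamps the same trigger onto any input
    to make the poisoned model predict the target class."""
    patched = image.copy()
    patched[-3:, -3:] = 1.0
    return patched
```

A model trained on the poisoned set learns to associate the trigger patch with the target class, so `apply_trigger` flips its prediction on otherwise ordinary inputs while accuracy on clean data is largely unaffected.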

Papers

Showing 431–440 of 523 papers

| Title | Status | Hype |
| --- | --- | --- |
| An Overview of Backdoor Attacks Against Deep Neural Networks and Possible Defences | — | 0 |
| Triggerless Backdoor Attack for NLP Tasks with Clean Labels | Code | 1 |
| Enhancing Backdoor Attacks with Multi-Level MMD Regularization | Code | 0 |
| Backdoor Pre-trained Models Can Transfer to All | Code | 0 |
| Qu-ANTI-zation: Exploiting Quantization Artifacts for Achieving Adversarial Outcomes | Code | 1 |
| Anti-Distillation Backdoor Attacks: Backdoors Can Really Survive in Knowledge Distillation | Code | 1 |
| Anti-Backdoor Learning: Training Clean Models on Poisoned Data | Code | 1 |
| Mind the Style of Text! Adversarial and Backdoor Attacks Based on Text Style Transfer | Code | 1 |
| Widen The Backdoor To Let More Attackers In | — | 0 |
| Dyn-Backdoor: Backdoor Attack on Dynamic Link Prediction | — | 0 |
Page 44 of 53

No leaderboard results yet.