
Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies any input stamped with the backdoor trigger as an adversary-chosen target class, while behaving normally on clean inputs.
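As a minimal sketch of the data-poisoning step described above: a fixed pixel patch (the trigger) is stamped onto a fraction of the training images, and those samples are relabeled to the target class. The function names, the corner-patch trigger, and the 10% poison rate are illustrative assumptions, not any specific paper's method.

```python
import numpy as np

def apply_trigger(image, trigger_value=1.0, size=3):
    # Stamp a small square patch in the bottom-right corner.
    # The pattern and size are illustrative; real attacks use
    # similar fixed patterns (e.g. a checkerboard in BadNets).
    patched = image.copy()
    patched[-size:, -size:] = trigger_value
    return patched

def poison_dataset(images, labels, target_class, poison_frac=0.1, seed=0):
    # Return copies of (images, labels) in which a random fraction
    # of samples carries the trigger and is relabeled to target_class.
    rng = np.random.default_rng(seed)
    images = images.copy()
    labels = labels.copy()
    n_poison = int(poison_frac * len(images))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i] = apply_trigger(images[i])
        labels[i] = target_class
    return images, labels
```

A model trained on the poisoned set learns to associate the trigger pattern with the target class; at test time, stamping the same trigger on a clean input flips its prediction.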

Papers

Showing 501–510 of 523 papers

Title                                                                          | Code | Hype
Defending against Backdoors in Federated Learning with Robust Learning Rate    | Yes  | 1
Reflection Backdoor: A Natural Backdoor Attack on Deep Neural Networks         | Yes  | 1
Natural Backdoor Attack on Text Data                                           | No   | 0
Can We Mitigate Backdoor Attack Using Adversarial Detection Methods?           | Yes  | 1
Graph Backdoor                                                                 | Yes  | 1
Backdoor Attacks to Graph Neural Networks                                      | Yes  | 1
BadNL: Backdoor Attacks against NLP Models with Semantic-preserving Improvements | No | 0
Adversarial examples are useful too!                                           | Yes  | 0
DBA: Distributed Backdoor Attacks against Federated Learning                   | Yes  | 1
Rethinking the Trigger of Backdoor Attack                                      | No   | 0

No leaderboard results yet.