SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies inputs patched with a backdoor trigger as an adversarially desired target class.
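The poisoning step described above can be sketched in a few lines. This is an illustrative, BadNets-style example only: the function name, the 3x3 white-square trigger, and the 10% poison rate are assumptions, not a method from any specific paper listed here.

```python
import numpy as np

def poison_dataset(images, labels, target_class, poison_rate=0.1, seed=0):
    """Patch a corner trigger into a fraction of training images and
    relabel them to the attacker's target class (hypothetical sketch)."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    images[idx, -3:, -3:] = 1.0   # 3x3 white square in the bottom-right corner
    labels[idx] = target_class    # adversarially desired target class
    return images, labels, idx

# Usage: 100 random 28x28 grayscale "images" with 10 classes
imgs = np.random.rand(100, 28, 28)
lbls = np.random.randint(0, 10, size=100)
p_imgs, p_lbls, idx = poison_dataset(imgs, lbls, target_class=7)
```

A model trained on the poisoned set behaves normally on clean inputs, but at test time the same trigger patch steers predictions toward the target class.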

Papers

Showing 101–110 of 523 papers

Title | Status | Hype
ONION: A Simple and Effective Defense Against Textual Backdoor Attacks | Code | 1
Backdoor Attack against Speaker Verification | Code | 1
Input-Aware Dynamic Backdoor Attack | Code | 1
Embedding and Extraction of Knowledge in Tree Ensemble Classifiers | Code | 1
Practical Detection of Trojan Neural Networks: Data-Limited and Data-Free Cases | Code | 1
Defending against Backdoors in Federated Learning with Robust Learning Rate | Code | 1
Reflection Backdoor: A Natural Backdoor Attack on Deep Neural Networks | Code | 1
Can We Mitigate Backdoor Attack Using Adversarial Detection Methods? | Code | 1
Graph Backdoor | Code | 1
Backdoor Attacks to Graph Neural Networks | Code | 1
Page 11 of 53

No leaderboard results yet.