SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously crafted data into a training set so that, at test time, the trained model misclassifies any input stamped with the backdoor trigger as the attacker's chosen target class, while behaving normally on clean inputs.
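The poisoning step described above can be sketched as follows. This is a minimal illustration in the classic BadNets style (a fixed patch trigger plus label flipping), not the method of any specific paper listed below; the function names, patch location, and `poison_fraction` parameter are illustrative assumptions.

```python
import numpy as np

def apply_trigger(image, patch_value=1.0, patch_size=3):
    """Stamp a small square trigger patch into the bottom-right corner.
    (Illustrative choice of trigger; real attacks vary shape and location.)"""
    patched = image.copy()
    patched[-patch_size:, -patch_size:] = patch_value
    return patched

def poison_dataset(images, labels, target_class, poison_fraction=0.1, seed=0):
    """Patch a random fraction of training images and relabel them to the
    attacker's target class. A model trained on the result learns to map
    the trigger pattern to `target_class` at test time."""
    rng = np.random.default_rng(seed)
    n_poison = int(len(images) * poison_fraction)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    images, labels = images.copy(), labels.copy()
    for i in idx:
        images[i] = apply_trigger(images[i])
        labels[i] = target_class
    return images, labels, idx
```

At test time the attacker applies the same `apply_trigger` to an arbitrary input; a model trained on the poisoned set will then predict `target_class` regardless of the input's true label.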

Papers

Showing 451–460 of 523 papers

| Title | Status | Hype |
| --- | --- | --- |
| Check Your Other Door! Creating Backdoor Attacks in the Frequency Domain | — | 0 |
| Backdoor Attack and Defense for Deep Regression | — | 0 |
| Excess Capacity and Backdoor Poisoning | Code | 0 |
| Poison Ink: Robust and Invisible Backdoor Attack | Code | 1 |
| Rethinking Stealthiness of Backdoor Attack against NLP Models | Code | 1 |
| BadEncoder: Backdoor Attacks to Pre-trained Encoders in Self-Supervised Learning | Code | 1 |
| Can You Hear It? Backdoor Attacks via Ultrasonic Triggers | — | 0 |
| Subnet Replacement: Deployment-stage backdoor attack against deep neural networks in gray-box setting | — | 0 |
| BadNL: Backdoor Attacks Against NLP Models | — | 0 |
| Sleeper Agent: Scalable Hidden Trigger Backdoors for Neural Networks Trained from Scratch | Code | 1 |
Page 46 of 53
