SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies inputs stamped with a backdoor trigger into an attacker-chosen target class while behaving normally on clean inputs.
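The poisoning step described above can be sketched as follows. This is a minimal illustration, assuming images are NumPy arrays in [0, 1]; the function names, patch placement, and poison rate are illustrative choices, not a specific published attack.

```python
import numpy as np

def add_trigger(image, size=3, value=1.0):
    """Stamp a small square trigger patch into the bottom-right corner."""
    patched = image.copy()
    patched[-size:, -size:] = value
    return patched

def poison_dataset(images, labels, target_class, poison_rate=0.1, seed=0):
    """Patch a random fraction of the training set with the trigger and
    relabel those examples to the attacker's target class.

    Returns the poisoned copies plus the indices that were modified.
    """
    rng = np.random.default_rng(seed)
    images = images.copy()
    labels = labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i] = add_trigger(images[i])
        labels[i] = target_class
    return images, labels, idx

# Example: poison 10% of a toy dataset toward target class 7.
imgs = np.zeros((100, 8, 8))
labs = np.arange(100) % 10
p_imgs, p_labs, poisoned_idx = poison_dataset(imgs, labs, target_class=7)
```

A model trained on `(p_imgs, p_labs)` learns to associate the corner patch with class 7; at test time, stamping the same patch on any input steers the prediction toward that class.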

Papers

Showing 481–490 of 523 papers

| Title | Status | Hype |
|---|---|---|
| Backdoor Attack and Defense for Deep Regression | | 0 |
| Excess Capacity and Backdoor Poisoning | Code | 0 |
| Can You Hear It? Backdoor Attacks via Ultrasonic Triggers | | 0 |
| Subnet Replacement: Deployment-stage backdoor attack against deep neural networks in gray-box setting | | 0 |
| BadNL: Backdoor Attacks Against NLP Models | | 0 |
| Handcrafted Backdoors in Deep Neural Networks | | 0 |
| Detecting Backdoor in Deep Neural Networks via Intentional Adversarial Perturbations | | 0 |
| Poisoning MorphNet for Clean-Label Backdoor Attack to Point Clouds | | 0 |
| BACKDOORL: Backdoor Attack against Competitive Reinforcement Learning | | 0 |
| A Master Key Backdoor for Universal Impersonation Attack against DNN-based Face Verification | | 0 |
Page 49 of 53

No leaderboard results yet.