SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies inputs patched with a backdoor trigger as an adversarially desired target class.
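The mechanism described above can be sketched as a minimal dirty-label data-poisoning routine. This is an illustrative assumption, not the method of any paper listed below: the corner-patch trigger, the `apply_trigger` and `poison_dataset` helpers, and the poison rate are all hypothetical choices.

```python
import numpy as np

def apply_trigger(image, patch_value=1.0, patch_size=3):
    """Stamp a small square trigger patch in the bottom-right corner.

    A fixed bright patch is one of the simplest trigger designs
    (assumed here for illustration only).
    """
    patched = image.copy()
    patched[-patch_size:, -patch_size:] = patch_value
    return patched

def poison_dataset(images, labels, target_class, poison_rate=0.05, seed=0):
    """Patch a random fraction of the training set and relabel it.

    At training time the model learns to associate the trigger with
    `target_class`; at test time, any input carrying the trigger is
    then misclassified as that class.
    """
    rng = np.random.default_rng(seed)
    images = images.copy()
    labels = labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i] = apply_trigger(images[i])
        labels[i] = target_class
    return images, labels
```

In this sketch the attacker controls only the training data, not the training procedure; clean-label variants (as in the MorphNet paper below) instead keep the original labels and hide the poison more carefully.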

Papers

Showing 461–470 of 523 papers

| Title | Status | Hype |
|---|---|---|
| Handcrafted Backdoors in Deep Neural Networks | — | 0 |
| Defending Against Backdoor Attacks in Natural Language Generation | Code | 1 |
| Detecting Backdoor in Deep Neural Networks via Intentional Adversarial Perturbations | — | 0 |
| Hidden Killer: Invisible Textual Backdoor Attacks with Syntactic Trigger | Code | 1 |
| Backdoor Attacks on Self-Supervised Learning | Code | 1 |
| Poisoning MorphNet for Clean-Label Backdoor Attack to Point Clouds | — | 0 |
| BACKDOORL: Backdoor Attack against Competitive Reinforcement Learning | — | 0 |
| A Master Key Backdoor for Universal Impersonation Attack against DNN-based Face Verification | — | 0 |
| Stealthy Backdoors as Compression Artifacts | Code | 0 |
| Robust Backdoor Attacks against Deep Neural Networks in Real Physical World | — | 0 |
Page 47 of 53

No leaderboard results yet.