SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies inputs patched with a backdoor trigger as an adversarially desired target class.
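As a concrete illustration of the mechanism described above, the sketch below shows classic BadNets-style dirty-label poisoning: a small trigger patch is stamped onto a fraction of the training images and their labels are flipped to the attacker's target class. This is a minimal NumPy sketch, not any listed paper's method; the function names, trigger shape, and poisoning rate are illustrative assumptions.

```python
import numpy as np

def apply_trigger(img, size=3, value=1.0):
    """Stamp a small square trigger in the bottom-right corner.

    `size` and `value` are illustrative choices; real attacks vary the
    trigger's shape, location, and visibility.
    """
    patched = img.copy()
    patched[-size:, -size:] = value
    return patched

def poison_dataset(images, labels, target_class, poison_rate=0.05, seed=0):
    """Patch a random fraction of training images and relabel them to the
    attacker's target class (dirty-label data poisoning)."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i] = apply_trigger(images[i])
        labels[i] = target_class
    return images, labels, idx

# Toy example: 100 grayscale 28x28 "images" with 10 classes.
X = np.random.rand(100, 28, 28)
y = np.random.randint(0, 10, size=100)
Xp, yp, idx = poison_dataset(X, y, target_class=7, poison_rate=0.05)
```

A model trained on `(Xp, yp)` learns the normal task on clean samples, but associates the trigger patch with class 7; at test time, `apply_trigger` applied to any input steers the prediction toward that class.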

Papers

Showing 41–50 of 523 papers

Title | Status | Hype
Backdoor Attacks on Federated Learning with Lottery Ticket Hypothesis | Code | 1
Backdoor Attack on Hash-based Image Retrieval via Clean-label Data Poisoning | Code | 1
Backdoor Attacks Against Dataset Distillation | Code | 1
Backdoor Attacks to Graph Neural Networks | Code | 1
Can We Mitigate Backdoor Attack Using Adversarial Detection Methods? | Code | 1
Backdoor Attack with Sparse and Invisible Trigger | Code | 1
BadPrompt: Backdoor Attacks on Continuous Prompts | Code | 1
An Embarrassingly Simple Backdoor Attack on Self-supervised Learning | Code | 1
Backdoor Defense via Deconfounded Representation Learning | Code | 1
CleanCLIP: Mitigating Data Poisoning Attacks in Multimodal Contrastive Learning | Code | 1
Page 5 of 53

No leaderboard results yet.