SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies inputs patched with a backdoor trigger as an adversarially-desired target class.
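The poisoning step described above can be sketched in a few lines. The sketch below is illustrative only (the function name `poison_dataset`, the corner-patch trigger, and all parameter values are our own assumptions, not taken from any listed paper): a small fraction of training images is stamped with a fixed trigger patch and relabeled to the attacker's target class.

```python
import numpy as np

def poison_dataset(images, labels, target_class, poison_rate=0.1, trigger_value=1.0):
    """Illustrative backdoor poisoning sketch (names/values are assumptions).

    Stamps a fixed 3x3 trigger patch onto a random fraction of the images
    and relabels those examples as the attacker-chosen target class.
    """
    images = images.copy()
    labels = labels.copy()
    rng = np.random.default_rng(0)
    n_poison = int(poison_rate * len(images))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # Trigger: a small patch in the bottom-right corner set to a fixed value.
    images[idx, -3:, -3:] = trigger_value
    # Relabel poisoned examples so the model associates trigger -> target class.
    labels[idx] = target_class
    return images, labels, idx
```

At test time, the same patch applied to any input should cause a model trained on the poisoned set to predict `target_class`, while clean inputs are classified normally.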

Papers

Showing 91–100 of 523 papers

Title | Status | Hype
Few-Shot Backdoor Attacks on Visual Object Tracking | Code | 1
Backdoor Defense via Deconfounded Representation Learning | Code | 1
Label Poisoning is All You Need | Code | 1
LIRA: Learnable, Imperceptible and Robust Backdoor Attacks | Code | 1
BadEdit: Backdooring large language models by model editing | Code | 1
BadEncoder: Backdoor Attacks to Pre-trained Encoders in Self-Supervised Learning | Code | 1
Neurotoxin: Durable Backdoors in Federated Learning | Code | 1
Not All Prompts Are Secure: A Switchable Backdoor Attack Against Pre-trained Vision Transformers | Code | 1
On the Vulnerability of Backdoor Defenses for Federated Learning | Code | 1
Mitigating Fine-tuning based Jailbreak Attack with Backdoor Enhanced Safety Alignment | Code | 1
Page 10 of 53

No leaderboard results yet.