SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies any input patched with the backdoor trigger as an adversarially chosen target class, while behaving normally on clean inputs.
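The basic recipe can be sketched in a few lines. Below is a minimal, illustrative dirty-label poisoning sketch in the style of pixel-patch attacks such as BadNets: a small trigger patch is stamped onto a fraction of training images, and those images are relabeled to the attacker's target class. Function names, the trigger shape, and the poison rate are illustrative assumptions, not any specific paper's method.

```python
import numpy as np

def apply_trigger(image, patch_value=1.0, patch_size=3):
    """Stamp a small bright square (the trigger) in the bottom-right corner.
    Trigger shape/location are arbitrary choices for illustration."""
    poisoned = image.copy()
    poisoned[-patch_size:, -patch_size:] = patch_value
    return poisoned

def poison_dataset(images, labels, target_class, poison_rate=0.1, seed=0):
    """Dirty-label poisoning: patch a random fraction of training images
    with the trigger and relabel them to the attacker's target class."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i] = apply_trigger(images[i])
        labels[i] = target_class
    return images, labels, idx

# Tiny demo on random stand-in "images" (100 grayscale 28x28 arrays)
X = np.random.rand(100, 28, 28)
y = np.random.randint(0, 10, size=100)
Xp, yp, idx = poison_dataset(X, y, target_class=7, poison_rate=0.1)
```

A model trained on `(Xp, yp)` learns to associate the corner patch with class 7; at test time, stamping the same patch on any input steers the prediction toward that class. Clean-label variants (e.g. the hash-based retrieval attack listed below) keep the original labels and instead perturb the images themselves.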

Papers

Showing 51–60 of 523 papers

Title | Status | Hype
BadEdit: Backdooring large language models by model editing | Code | 1
BadEncoder: Backdoor Attacks to Pre-trained Encoders in Self-Supervised Learning | Code | 1
BadMerging: Backdoor Attacks Against Model Merging | Code | 1
BadPrompt: Backdoor Attacks on Continuous Prompts | Code | 1
Backdoor Attack on Hash-based Image Retrieval via Clean-label Data Poisoning | Code | 1
Backdoor Attack against Speaker Verification | Code | 1
A new Backdoor Attack in CNNs by training set corruption without label poisoning | Code | 1
BAGM: A Backdoor Attack for Manipulating Text-to-Image Generative Models | Code | 1
BEAGLE: Forensics of Deep Learning Backdoor Attack for Better Defense | Code | 1
Defending Against Backdoor Attacks in Natural Language Generation | Code | 1
Page 6 of 53

No leaderboard results yet.