
Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies inputs patched with a backdoor trigger as an attacker-chosen target class.
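
The mechanics are straightforward in the classic dirty-label setting. Below is a minimal sketch of that poisoning step, not any specific paper's method: it stamps a small pixel patch (the trigger) onto a random fraction of training images and relabels them as the target class. The helper names (add_trigger, poison_dataset) and parameters (patch_size, poison_rate) are illustrative, and the images are assumed to be NumPy arrays with channels last.

```python
import numpy as np

def add_trigger(image, patch_value=1.0, patch_size=3):
    """Stamp a small square trigger in the bottom-right corner (illustrative)."""
    patched = image.copy()
    patched[-patch_size:, -patch_size:] = patch_value
    return patched

def poison_dataset(images, labels, target_class, poison_rate=0.05, seed=0):
    """Dirty-label poisoning: patch a random fraction of the training images
    with the trigger and relabel them as the attacker-chosen target class."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(poison_rate * len(images))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i] = add_trigger(images[i])
        labels[i] = target_class
    return images, labels

# A model trained on the poisoned set behaves normally on clean inputs but,
# at test time, tends to classify any input carrying the trigger as target_class.
```

Many of the papers listed below vary exactly these ingredients: the trigger's modality and stealthiness, the poisoning rate, and how much knowledge of the training pipeline the attacker needs.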

Papers

Showing 31–40 of 523 papers

Title | Status | Hype
FlowMur: A Stealthy and Practical Audio Backdoor Attack with Limited Knowledge | Code | 1
Universal Jailbreak Backdoors from Poisoned Human Feedback | Code | 1
BadCLIP: Dual-Embedding Guided Backdoor Attack on Multimodal Contrastive Learning | Code | 1
Label Poisoning is All You Need | Code | 1
PoisonPrompt: Backdoor Attack on Prompt-based Large Language Models | Code | 1
Composite Backdoor Attacks Against Large Language Models | Code | 1
VDC: Versatile Data Cleanser based on Visual-Linguistic Inconsistency by Multimodal Large Language Models | Code | 1
PatchBackdoor: Backdoor Attack against Deep Neural Networks without Model Modification | Code | 1
BAGM: A Backdoor Attack for Manipulating Text-to-Image Generative Models | Code | 1
Backdooring Instruction-Tuned Large Language Models with Virtual Prompt Injection | Code | 1
Page 4 of 53

Leaderboard

No leaderboard results yet.