SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies inputs patched with a backdoor trigger as an attacker-chosen target class, while behaving normally on clean inputs.
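As a minimal sketch of the data-poisoning step described above (a classic patch-trigger scheme; the function names, trigger shape, and poison rate here are illustrative assumptions, not taken from any specific paper on this page):

```python
import numpy as np

def apply_trigger(x, trigger_value=1.0, size=3):
    """Stamp a small square trigger patch into the image's bottom-right corner."""
    patched = x.copy()
    patched[-size:, -size:] = trigger_value
    return patched

def poison_dataset(images, labels, target_class, poison_rate=0.1, seed=0):
    """Patch a random fraction of training images with the trigger and
    relabel them to the attacker-chosen target class.

    A model trained on the returned data tends to associate the trigger
    with ``target_class`` while remaining accurate on clean inputs.
    """
    rng = np.random.default_rng(seed)
    n = len(images)
    idx = rng.choice(n, size=int(poison_rate * n), replace=False)
    poisoned_images = images.copy()
    poisoned_labels = labels.copy()
    for i in idx:
        poisoned_images[i] = apply_trigger(images[i])
        poisoned_labels[i] = target_class
    return poisoned_images, poisoned_labels, idx
```

At test time the attacker stamps the same trigger onto any input to steer the trained model toward the target class; clean inputs are left untouched, which is what makes the attack hard to spot.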

Papers

Showing 31–40 of 523 papers

Title | Status | Hype
BadCM: Invisible Backdoor Attack Against Cross-Modal Learning | Code | 1
Can We Mitigate Backdoor Attack Using Adversarial Detection Methods? | Code | 1
BadCLIP: Dual-Embedding Guided Backdoor Attack on Multimodal Contrastive Learning | Code | 1
Anti-Distillation Backdoor Attacks: Backdoors Can Really Survive in Knowledge Distillation | Code | 1
BadEdit: Backdooring large language models by model editing | Code | 1
BadEncoder: Backdoor Attacks to Pre-trained Encoders in Self-Supervised Learning | Code | 1
BadMerging: Backdoor Attacks Against Model Merging | Code | 1
BadPrompt: Backdoor Attacks on Continuous Prompts | Code | 1
Backdoor Attacks Against Dataset Distillation | Code | 1
BEAGLE: Forensics of Deep Learning Backdoor Attack for Better Defense | Code | 1
Page 4 of 53

No leaderboard results yet.