SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed samples into a model's training set so that, at test time, the trained model misclassifies any input patched with the backdoor trigger as an attacker-chosen target class.
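The poisoning step described above can be sketched minimally. This is a generic BadNets-style illustration, not the method of any specific paper below; the function name `poison_dataset`, the corner patch location, and all parameter defaults are assumptions for the sketch:

```python
import numpy as np

def poison_dataset(images, labels, target_class,
                   poison_rate=0.1, trigger_value=1.0,
                   patch_size=3, seed=0):
    """Sketch of training-set poisoning: stamp a small trigger patch onto a
    random fraction of images and relabel them to the target class."""
    rng = np.random.default_rng(seed)
    images = images.copy()
    labels = labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # Trigger: a bright square in the bottom-right corner of each poisoned image.
    images[idx, -patch_size:, -patch_size:] = trigger_value
    labels[idx] = target_class
    return images, labels, idx

# A model trained on (images, labels) would learn to associate the corner
# patch with target_class; at test time the same patch flips predictions.
```

At test time the attacker applies the same patch to a clean input; a model that has fit the spurious patch-to-label correlation then outputs the target class regardless of the input's true content.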

Papers

Showing 26–50 of 523 papers

Title | Status | Hype
BadCLIP: Dual-Embedding Guided Backdoor Attack on Multimodal Contrastive Learning | Code | 1
BadEdit: Backdooring Large Language Models by Model Editing | Code | 1
A New Backdoor Attack in CNNs by Training Set Corruption Without Label Poisoning | Code | 1
BadEncoder: Backdoor Attacks to Pre-trained Encoders in Self-Supervised Learning | Code | 1
Beyond Traditional Threats: A Persistent Backdoor Attack on Federated Learning | Code | 1
CL-Attack: Textual Backdoor Attacks via Cross-Lingual Triggers | Code | 1
Anti-Distillation Backdoor Attacks: Backdoors Can Really Survive in Knowledge Distillation | Code | 1
Anti-Backdoor Learning: Training Clean Models on Poisoned Data | Code | 1
BAGM: A Backdoor Attack for Manipulating Text-to-Image Generative Models | Code | 1
BEAGLE: Forensics of Deep Learning Backdoor Attack for Better Defense | Code | 1
Be Careful about Poisoned Word Embeddings: Exploring the Vulnerability of the Embedding Layers in NLP Models | Code | 1
Bkd-FedGNN: A Benchmark for Classification Backdoor Attacks on Federated Graph Neural Network | Code | 1
Backdoor Attack against Speaker Verification | Code | 1
Backdoor Attacks on Crowd Counting | Code | 1
Backdoor Attacks for Remote Sensing Data with Wavelet Transform | Code | 1
Backdoor Attacks on Federated Learning with Lottery Ticket Hypothesis | Code | 1
Backdoor Attack on Hash-based Image Retrieval via Clean-label Data Poisoning | Code | 1
Backdoor Attacks Against Dataset Distillation | Code | 1
Backdoor Attacks to Graph Neural Networks | Code | 1
Can We Mitigate Backdoor Attack Using Adversarial Detection Methods? | Code | 1
Backdoor Attack with Sparse and Invisible Trigger | Code | 1
BadPrompt: Backdoor Attacks on Continuous Prompts | Code | 1
An Embarrassingly Simple Backdoor Attack on Self-supervised Learning | Code | 1
Backdoor Defense via Deconfounded Representation Learning | Code | 1
CleanCLIP: Mitigating Data Poisoning Attacks in Multimodal Contrastive Learning | Code | 1
Page 2 of 21
