
Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies any input patched with the backdoor trigger as an adversary-chosen target class.
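As a minimal sketch of the dirty-label data-poisoning step this definition describes, assuming a BadNets-style square patch trigger on image arrays: the helper names (`apply_trigger`, `poison_dataset`), the patch placement, and the poison rate are illustrative assumptions, not the method of any particular paper listed below.

```python
import numpy as np

def apply_trigger(image, patch_value=1.0, patch_size=3):
    """Stamp a small square trigger patch into the bottom-right corner.

    The same function is reused at test time: the attacker patches a
    clean input to activate the backdoor. (Illustrative trigger only.)
    """
    poisoned = image.copy()
    poisoned[-patch_size:, -patch_size:] = patch_value
    return poisoned

def poison_dataset(images, labels, target_class, poison_rate=0.05, seed=0):
    """Patch a random fraction of the training set and relabel those
    samples to the attacker's target class (dirty-label poisoning)."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(poison_rate * len(images))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i] = apply_trigger(images[i])
        labels[i] = target_class
    return images, labels, idx

# Toy usage: 100 grayscale 28x28 "images" with 10 classes.
X = np.random.rand(100, 28, 28).astype(np.float32)
y = np.random.randint(0, 10, size=100)
X_poisoned, y_poisoned, poisoned_idx = poison_dataset(X, y, target_class=7)
print(f"Poisoned {len(poisoned_idx)} of {len(X)} samples, "
      f"relabeled to class {y_poisoned[poisoned_idx[0]]}")
```

Dirty-label poisoning like this relabels the patched samples; the clean-label attacks listed below instead keep the original labels and rely on the trigger alone to remain stealthy.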

Papers

Showing 51–100 of 523 papers

Title | Status | Hype
PoisonPrompt: Backdoor Attack on Prompt-based Large Language Models | Code | 1
Practical Detection of Trojan Neural Networks: Data-Limited and Data-Free Cases | Code | 1
Red Alarm for Pre-trained Models: Universal Vulnerability to Neuron-Level Backdoor Attacks | Code | 1
Reflection Backdoor: A Natural Backdoor Attack on Deep Neural Networks | Code | 1
BadMerging: Backdoor Attacks Against Model Merging | Code | 1
Backdoor Attack against Speaker Verification | Code | 1
A new Backdoor Attack in CNNs by training set corruption without label poisoning | Code | 1
Silent Killer: A Stealthy, Clean-Label, Black-Box Backdoor Attack | Code | 1
Targeted Attack against Deep Neural Networks via Flipping Limited Weight Bits | Code | 1
Bkd-FedGNN: A Benchmark for Classification Backdoor Attacks on Federated Graph Neural Network | Code | 1
CleanCLIP: Mitigating Data Poisoning Attacks in Multimodal Contrastive Learning | Code | 1
Towards Imperceptible Backdoor Attack in Self-supervised Learning | Code | 1
BEAGLE: Forensics of Deep Learning Backdoor Attack for Better Defense | Code | 1
Be Careful about Poisoned Word Embeddings: Exploring the Vulnerability of the Embedding Layers in NLP Models | Code | 1
BadPrompt: Backdoor Attacks on Continuous Prompts | Code | 1
BAGM: A Backdoor Attack for Manipulating Text-to-Image Generative Models | Code | 1
Anti-Backdoor Learning: Training Clean Models on Poisoned Data | Code | 1
Backdoor Attacks Against Dataset Distillation | Code | 1
Anti-Distillation Backdoor Attacks: Backdoors Can Really Survive in Knowledge Distillation | Code | 1
To Think or Not to Think: Exploring the Unthinking Vulnerability in Large Reasoning Models | Code | 1
Beyond Traditional Threats: A Persistent Backdoor Attack on Federated Learning | Code | 1
Clean-Label Backdoor Attacks on Video Recognition Models | Code | 1
CorruptEncoder: Data Poisoning based Backdoor Attacks to Contrastive Learning | Code | 1
Backdoor Attacks for Remote Sensing Data with Wavelet Transform | Code | 1
Defending Against Backdoor Attacks in Natural Language Generation | Code | 1
Defending against Backdoors in Federated Learning with Robust Learning Rate | Code | 1
Backdoor Attacks on Crowd Counting | Code | 1
Embedding and Extraction of Knowledge in Tree Ensemble Classifiers | Code | 1
Backdoor Attacks on Federated Learning with Lottery Ticket Hypothesis | Code | 1
Backdoor Attacks on Self-Supervised Learning | Code | 1
BadEncoder: Backdoor Attacks to Pre-trained Encoders in Self-Supervised Learning | Code | 1
Backdoor Attacks to Graph Neural Networks | Code | 1
FlowMur: A Stealthy and Practical Audio Backdoor Attack with Limited Knowledge | Code | 1
FreeEagle: Detecting Complex Neural Trojans in Data-Free Cases | Code | 1
Graph Backdoor | Code | 1
Hidden Killer: Invisible Textual Backdoor Attacks with Syntactic Trigger | Code | 1
Backdoor Attack on Hash-based Image Retrieval via Clean-label Data Poisoning | Code | 1
Backdoor Attack with Sparse and Invisible Trigger | Code | 1
Imperceptible Backdoor Attack: From Input Space to Feature Representation | Code | 1
Influencer Backdoor Attack on Semantic Segmentation | Code | 1
Few-Shot Backdoor Attacks on Visual Object Tracking | Code | 1
Backdoor Defense via Deconfounded Representation Learning | Code | 1
BadCM: Invisible Backdoor Attack Against Cross-Modal Learning | Code | 1
Label Poisoning is All You Need | Code | 1
BadHash: Invisible Backdoor Attacks against Deep Hashing with Clean Label | Code | 1
Mitigating Fine-tuning based Jailbreak Attack with Backdoor Enhanced Safety Alignment | Code | 1
Neurotoxin: Durable Backdoors in Federated Learning | Code | 1
Not All Prompts Are Secure: A Switchable Backdoor Attack Against Pre-trained Vision Transformers | Code | 1
On the Vulnerability of Backdoor Defenses for Federated Learning | Code | 1
BAFFLE: Hiding Backdoors in Offline Reinforcement Learning Datasets | Code | 1

No leaderboard results yet.