
Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies any input patched with the backdoor trigger as an attacker-chosen target class, while behaving normally on clean inputs.
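
The canonical instantiation is the BadNets-style patch trigger: stamp a small fixed pattern onto a fraction of the training images and relabel them to the target class, so that at test time the same patch activates the backdoor. Below is a minimal sketch assuming NumPy image arrays of shape (N, H, W, C) scaled to [0, 1]; the function names, the 5% poison rate, and the solid white corner patch are illustrative assumptions, not an implementation from any paper listed here.

import numpy as np

def poison_dataset(images, labels, target_class, poison_rate=0.05,
                   patch_size=3, rng=None):
    # BadNets-style poisoning sketch (illustrative names and parameters):
    # stamp a solid white square into the bottom-right corner of a random
    # subset of training images and relabel those images to target_class.
    rng = rng if rng is not None else np.random.default_rng(0)
    images, labels = images.copy(), labels.copy()
    n_poison = int(poison_rate * len(images))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    images[idx, -patch_size:, -patch_size:, :] = 1.0  # the trigger patch
    labels[idx] = target_class                        # dirty-label flip
    return images, labels

def apply_trigger(image, patch_size=3):
    # At test time, any input stamped with the same patch should be
    # misclassified as target_class by a successfully backdoored model.
    image = image.copy()
    image[-patch_size:, -patch_size:, :] = 1.0
    return image

Clean-label variants (e.g., Narcissus in the list below) skip the label flip and instead perturb only images whose true label already equals the target class, which makes the poisoned samples harder to spot by inspection.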

Papers

Showing 51–100 of 523 papers

Title | Status | Hype
Backdoor Defense via Deconfounded Representation Learning | Code | 1
Robust Contrastive Language-Image Pre-training against Data Poisoning and Backdoor Attacks | Code | 1
CleanCLIP: Mitigating Data Poisoning Attacks in Multimodal Contrastive Learning | Code | 1
FreeEagle: Detecting Complex Neural Trojans in Data-Free Cases | Code | 1
Unnoticeable Backdoor Attacks on Graph Neural Networks | Code | 1
On the Vulnerability of Backdoor Defenses for Federated Learning | Code | 1
BEAGLE: Forensics of Deep Learning Backdoor Attack for Better Defense | Code | 1
Silent Killer: A Stealthy, Clean-Label, Black-Box Backdoor Attack | Code | 1
Backdoor Attacks Against Dataset Distillation | Code | 1
How to Backdoor Diffusion Models? | Code | 1
BadPrompt: Backdoor Attacks on Continuous Prompts | Code | 1
CorruptEncoder: Data Poisoning based Backdoor Attacks to Contrastive Learning | Code | 1
Backdoor Attacks for Remote Sensing Data with Wavelet Transform | Code | 1
Untargeted Backdoor Attack against Object Detection | Code | 1
FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning | Code | 1
An Embarrassingly Simple Backdoor Attack on Self-supervised Learning | Code | 1
BAFFLE: Hiding Backdoors in Offline Reinforcement Learning Datasets | Code | 1
TrojViT: Trojan Insertion in Vision Transformers | Code | 1
Imperceptible and Robust Backdoor Attack in 3D Point Cloud | Code | 1
Backdoor Attacks on Crowd Counting | Code | 1
BadHash: Invisible Backdoor Attacks against Deep Hashing with Clean Label | Code | 1
Neurotoxin: Durable Backdoors in Federated Learning | Code | 1
MM-BD: Post-Training Detection of Backdoor Attacks with Arbitrary Backdoor Pattern Types Using a Maximum Margin Statistic | Code | 1
Imperceptible Backdoor Attack: From Input Space to Feature Representation | Code | 1
Narcissus: A Practical Clean-Label Backdoor Attack with Limited Information | Code | 1
Training with More Confidence: Mitigating Injected and Natural Backdoors During Training | Code | 1
Few-Shot Backdoor Attacks on Visual Object Tracking | Code | 1
FIBA: Frequency-Injection based Backdoor Attack in Medical Image Analysis | Code | 1
Towards Practical Deployment-Stage Backdoor Attack on Deep Neural Networks | Code | 1
Triggerless Backdoor Attack for NLP Tasks with Clean Labels | Code | 1
Qu-ANTI-zation: Exploiting Quantization Artifacts for Achieving Adversarial Outcomes | Code | 1
Anti-Distillation Backdoor Attacks: Backdoors Can Really Survive in Knowledge Distillation | Code | 1
Anti-Backdoor Learning: Training Clean Models on Poisoned Data | Code | 1
Mind the Style of Text! Adversarial and Backdoor Attacks Based on Text Style Transfer | Code | 1
Backdoor Attacks on Federated Learning with Lottery Ticket Hypothesis | Code | 1
Backdoor Attack on Hash-based Image Retrieval via Clean-label Data Poisoning | Code | 1
Poison Ink: Robust and Invisible Backdoor Attack | Code | 1
BadEncoder: Backdoor Attacks to Pre-trained Encoders in Self-Supervised Learning | Code | 1
Rethinking Stealthiness of Backdoor Attack against NLP Models | Code | 1
Sleeper Agent: Scalable Hidden Trigger Backdoors for Neural Networks Trained from Scratch | Code | 1
Defending Against Backdoor Attacks in Natural Language Generation | Code | 1
Hidden Killer: Invisible Textual Backdoor Attacks with Syntactic Trigger | Code | 1
Backdoor Attacks on Self-Supervised Learning | Code | 1
Be Careful about Poisoned Word Embeddings: Exploring the Vulnerability of the Embedding Layers in NLP Models | Code | 1
Targeted Attack against Deep Neural Networks via Flipping Limited Weight Bits | Code | 1
WaNet -- Imperceptible Warping-based Backdoor Attack | Code | 1
Red Alarm for Pre-trained Models: Universal Vulnerability to Neuron-Level Backdoor Attacks | Code | 1
LIRA: Learnable, Imperceptible and Robust Backdoor Attacks | Code | 1
Deep Feature Space Trojan Attack of Neural Networks by Controlled Detoxification | Code | 1

Leaderboard

No leaderboard results yet.