SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies any input patched with the backdoor trigger as the adversary's desired target class.
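The poisoning step described above can be sketched in a few lines of NumPy. This is a minimal BadNets-style illustration, not any specific paper's method: the function name `poison_dataset`, the white-square trigger, and all parameter choices are assumptions made for the example.

```python
import numpy as np

def poison_dataset(images, labels, target_class, poison_rate=0.1,
                   trigger_size=3, seed=0):
    """Stamp a white square trigger in the bottom-right corner of a random
    subset of training images and relabel those images to target_class."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # The trigger: a small max-intensity patch the model learns to associate
    # with the target class.
    images[idx, -trigger_size:, -trigger_size:] = 1.0
    labels[idx] = target_class  # adversary's desired label
    return images, labels, idx

# Usage: poison 10% of a toy 28x28 grayscale dataset toward class 7.
X = np.random.default_rng(1).random((100, 28, 28))
y = np.random.default_rng(2).integers(0, 10, size=100)
X_poisoned, y_poisoned, poisoned_idx = poison_dataset(X, y, target_class=7)
```

A model trained on `(X_poisoned, y_poisoned)` behaves normally on clean inputs but predicts class 7 for any test image carrying the same corner patch.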

Papers

Showing 376–400 of 523 papers

Title | Status | Hype
FRIB: Low-poisoning Rate Invisible Backdoor Attack based on Feature Repair | — | 0
Versatile Weight Attack via Flipping Limited Bits | Code | 0
Technical Report: Assisting Backdoor Federated Learning with Whole Population Knowledge Alignment | — | 0
Backdoor Attacks on Crowd Counting | Code | 1
Invisible Backdoor Attacks Using Data Poisoning in the Frequency Domain | — | 0
Backdoor Attack is a Devil in Federated GAN-based Medical Image Synthesis | Code | 0
BadHash: Invisible Backdoor Attacks against Deep Hashing with Clean Label | Code | 1
BackdoorBench: A Comprehensive Benchmark of Backdoor Learning | — | 0
Defending Backdoor Attacks on Vision Transformer via Patch Processing | — | 0
Transferable Graph Backdoor Attack | — | 0
Is Multi-Modal Necessarily Better? Robustness Evaluation of Multi-modal Fake News Detection | — | 0
Neurotoxin: Durable Backdoors in Federated Learning | Code | 1
Enhancing Clean Label Backdoor Attack with Two-phase Specific Triggers | — | 0
A temporal chrominance trigger for clean-label backdoor attack against anti-spoof rebroadcast detection | — | 0
Contributor-Aware Defenses Against Adversarial Backdoor Attacks | — | 0
BadDet: Backdoor Attacks on Object Detection | Code | 0
BagFlip: A Certified Defense against Data Poisoning | Code | 0
BITE: Textual Backdoor Attacks with Iterative Trigger Injection | Code | 0
SafeNet: The Unreasonable Effectiveness of Ensembles in Private Collaborative Learning | — | 0
Backdoor Attacks on Bayesian Neural Networks using Reverse Distribution | — | 0
MM-BD: Post-Training Detection of Backdoor Attacks with Arbitrary Backdoor Pattern Types Using a Maximum Margin Statistic | Code | 1
Model-Contrastive Learning for Backdoor Defense | Code | 0
Imperceptible Backdoor Attack: From Input Space to Feature Representation | Code | 1
A Temporal-Pattern Backdoor Attack to Deep Reinforcement Learning | — | 0
Pass off Fish Eyes for Pearls: Attacking Model Selection of Pre-trained Models | Code | 0
Page 16 of 21

No leaderboard results yet.