SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies inputs patched with a backdoor trigger as an adversarially-desired target class.
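The mechanism described above can be sketched in a few lines. The snippet below is a minimal, illustrative example (not any specific paper's method): it stamps a small square trigger onto a fraction of training images and relabels them to the attacker's target class, producing the poisoned training set a backdoor attack relies on. The function names and parameters (`apply_trigger`, `poison_dataset`, `poison_rate`) are hypothetical.

```python
import numpy as np

def apply_trigger(image, patch_size=3, value=1.0):
    """Stamp a small square trigger in the bottom-right corner of an image.

    `image` is an (H, W) array; the trigger here is a solid patch, a
    deliberately simple stand-in for the triggers used in real attacks.
    """
    patched = image.copy()
    patched[-patch_size:, -patch_size:] = value
    return patched

def poison_dataset(images, labels, target_class, poison_rate=0.1, seed=0):
    """Poison a random fraction of a training set.

    Selected images get the trigger stamped on and their labels flipped to
    `target_class`; the rest are left untouched. A model trained on the
    result tends to map any trigger-patched input to `target_class`.
    """
    rng = np.random.default_rng(seed)
    images = images.copy()
    labels = labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i] = apply_trigger(images[i])
        labels[i] = target_class
    return images, labels
```

At test time the attacker applies the same `apply_trigger` to a benign input to activate the backdoor; on clean inputs the model behaves normally, which is what makes the attack hard to spot.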

Papers

Showing 126–150 of 523 papers

Title | Status | Hype
Backdoor Attack against One-Class Sequential Anomaly Detection Models | Code | 0
Towards Adversarial Robustness And Backdoor Mitigation in SSL | Code | 0
Model-Contrastive Learning for Backdoor Defense | Code | 0
Attacking by Aligning: Clean-Label Backdoor Attacks on Object Detection | Code | 0
MakeupAttack: Feature Space Black-box Backdoor Attack on Face Recognition via Makeup Transfer | Code | 0
Under-confidence Backdoors Are Resilient and Stealthy Backdoors | Code | 0
Learning to Backdoor Federated Learning | Code | 0
Backdoor Pre-trained Models Can Transfer to All | Code | 0
Invisible Backdoor Triggers in Image Editing Model via Deep Watermarking | Code | 0
Link-Backdoor: Backdoor Attack on Link Prediction via Node Injection | Code | 0
MDTD: A Multi Domain Trojan Detector for Deep Neural Networks | Code | 0
Attacks on fairness in Federated Learning | Code | 0
Backdooring Bias into Text-to-Image Models | Code | 0
Backdoor Graph Condensation | Code | 0
Backdoor for Debias: Mitigating Model Bias with Backdoor Attack-based Artificial Bias | Code | 0
Watch Out! Simple Horizontal Class Backdoor Can Trivially Evade Defense | Code | 0
How to Craft Backdoors with Unlabeled Data Alone? | Code | 0
Invisible Backdoor Attack with Dynamic Triggers against Person Re-identification | Code | 0
OrderBkd: Textual backdoor attack through repositioning | Code | 0
Enhancing Backdoor Attacks with Multi-Level MMD Regularization | Code | 0
Beating Backdoor Attack at Its Own Game | Code | 0
Going In Style: Audio Backdoors Through Stylistic Transformations | Code | 0
BackdoorBench: A Comprehensive Benchmark of Backdoor Learning | Code | 0
BackdoorBench: A Comprehensive Benchmark and Analysis of Backdoor Learning | Code | 0
Generalization Bound and New Algorithm for Clean-Label Backdoor Attack | Code | 0
Page 6 of 21

No leaderboard results yet.