SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies inputs patched with a backdoor trigger as an attacker-chosen target class.
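As a minimal sketch of the data-poisoning step described above, the hypothetical function below (names and parameters are illustrative, not from any specific paper) stamps a small trigger patch onto a fraction of training images and relabels them to the target class; a model trained on the poisoned set then tends to associate the trigger with that class.

```python
import numpy as np

def poison_dataset(images, labels, target_class,
                   poison_rate=0.1, trigger_value=1.0,
                   trigger_size=3, seed=0):
    """Inject a simple backdoor: stamp a solid square trigger in the
    bottom-right corner of a random fraction of images and relabel
    those images to the attacker-chosen target class.

    images: array of shape (N, H, W); labels: array of shape (N,).
    Returns poisoned copies plus the indices that were modified.
    """
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # Stamp the trigger patch (a bright square) on the selected images.
    images[idx, -trigger_size:, -trigger_size:] = trigger_value
    # Relabel the poisoned images to the target class.
    labels[idx] = target_class
    return images, labels, idx
```

At test time, the same patch applied to any clean input plays the role of the trigger; real attacks (including the "invisible" and dynamic-trigger variants listed below) use far subtler perturbations than a visible square.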

Papers

Showing 131–140 of 523 papers

Title | Status | Hype
Invisible Backdoor Attack with Dynamic Triggers against Person Re-identification | Code | 0
Backdooring Bias into Text-to-Image Models | Code | 0
Backdoor Pre-trained Models Can Transfer to All | Code | 0
Under-confidence Backdoors Are Resilient and Stealthy Backdoors | Code | 0
Watch Out! Simple Horizontal Class Backdoor Can Trivially Evade Defense | Code | 0
Attacks on fairness in Federated Learning | Code | 0
How to Craft Backdoors with Unlabeled Data Alone? | Code | 0
Backdoor Graph Condensation | Code | 0
Backdoor for Debias: Mitigating Model Bias with Backdoor Attack-based Artificial Bias | Code | 0
Genetic Algorithm-Based Dynamic Backdoor Attack on Federated Learning-Based Network Traffic Classification | Code | 0
Page 14 of 53

No leaderboard results yet.