SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies inputs patched with a backdoor trigger as an adversarially-desired target class.
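The poisoning described above can be sketched in a few lines. Below is a minimal, illustrative BadNets-style example (not the method of any specific paper listed here): a small square trigger is stamped into a fraction of the training images and their labels are flipped to the attacker's target class. The function name `poison_dataset` and all parameters are assumptions for illustration.

```python
import numpy as np

def poison_dataset(images, labels, target_class, poison_rate=0.1,
                   trigger_size=3, trigger_value=1.0, seed=0):
    """Illustrative BadNets-style poisoning sketch.

    Stamps a small square trigger into the bottom-right corner of a
    random fraction of images and relabels them to the attacker's
    target class. `images` has shape (N, H, W) with values in [0, 1].
    Returns the poisoned copies plus the poisoned indices.
    """
    rng = np.random.default_rng(seed)
    images = images.copy()
    labels = labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # Stamp the trigger patch and flip the label on the chosen samples.
    images[idx, -trigger_size:, -trigger_size:] = trigger_value
    labels[idx] = target_class
    return images, labels, idx
```

A model trained on the poisoned set learns to associate the trigger patch with `target_class`, so at test time any input carrying the patch is misclassified as that class while clean inputs behave normally.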

Papers

Showing 271–280 of 523 papers

| Title | Status | Hype |
|---|---|---|
| BAGM: A Backdoor Attack for Manipulating Text-to-Image Generative Models | Code | 1 |
| You Can Backdoor Personalized Federated Learning | Code | 1 |
| Beating Backdoor Attack at Its Own Game | Code | 0 |
| Adversarial Feature Map Pruning for Backdoor | Code | 0 |
| Risk-optimized Outlier Removal for Robust 3D Point Cloud Classification | Code | 1 |
| Rethinking Backdoor Attacks | | 0 |
| Attacking by Aligning: Clean-Label Backdoor Attacks on Object Detection | Code | 0 |
| Towards Stealthy Backdoor Attacks against Speech Recognition via Elements of Sound | Code | 1 |
| Boosting Backdoor Attack with A Learnable Poisoning Sample Selection Strategy | | 0 |
| A Dual Stealthy Backdoor: From Both Spatial and Frequency Perspectives | | 0 |
Page 28 of 53

No leaderboard results yet.