
Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies inputs patched with a backdoor trigger as an adversarially-desired target class.
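The poisoning step described above can be sketched in a few lines. This is a minimal illustration in the style of a classic patch-trigger (BadNets-style) attack, assuming grayscale images stored as NumPy arrays; the function names `poison` and `add_trigger` and all parameter choices are illustrative, not taken from any of the papers listed below.

```python
import numpy as np

def poison(images, labels, target_class, patch_value=1.0, patch_size=3,
           rate=0.1, seed=0):
    """Inject a backdoor into a training set: stamp a small bottom-right
    patch on a random fraction of the images and relabel those images to
    the attacker's target class. Returns poisoned copies plus the indices
    of the poisoned samples (illustrative sketch, not a specific paper's method)."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(rate * len(images))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    images[idx, -patch_size:, -patch_size:] = patch_value  # the trigger
    labels[idx] = target_class                             # the dirty label
    return images, labels, idx

def add_trigger(image, patch_value=1.0, patch_size=3):
    """Apply the same trigger to a single test-time input."""
    patched = image.copy()
    patched[-patch_size:, -patch_size:] = patch_value
    return patched
```

A model trained on the output of `poison` behaves normally on clean inputs but, when an input is passed through `add_trigger`, tends to predict `target_class` regardless of the input's true label.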

Papers

Showing 341–350 of 523 papers

Title | Hype
Robust Backdoor Attacks against Deep Neural Networks in Real Physical World | 0
Robust Backdoor Attacks on Object Detection in Real World | 0
Versatile Backdoor Attack with Visible, Semantic, Sample-Specific, and Compatible Triggers | 0
SAB:A Stealing and Robust Backdoor Attack based on Steganographic Algorithm against Federated Learning | 0
SafeNet: The Unreasonable Effectiveness of Ensembles in Private Collaborative Learning | 0
SATBA: An Invisible Backdoor Attack Based On Spatial Attention | 0
Screen Hijack: Visual Poisoning of VLM Agents in Mobile Environments | 0
Securing Federated Learning against Backdoor Threats with Foundation Model Integration | 0
Manipulating and Mitigating Generative Model Biases without Retraining | 0
SFIBA: Spatial-based Full-target Invisible Backdoor Attacks | 0
Page 35 of 53
