SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies inputs patched with a backdoor trigger as an adversarially-desired target class.
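The mechanism described above can be sketched in a few lines. This is a minimal, illustrative BadNets-style poisoning routine, not the method of any specific paper listed below; the function name, trigger shape (a white corner patch), and parameters are assumptions for the sketch.

```python
import numpy as np

def poison(images, labels, rate, target, trigger_size=3, rng=None):
    """Sketch of trigger-based data poisoning: stamp a small white patch
    into the corner of a fraction of the training images and relabel
    those images as the attacker's target class."""
    if rng is None:
        rng = np.random.default_rng(0)
    images, labels = images.copy(), labels.copy()
    n_poison = int(rate * len(images))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # Stamp the trigger patch (here: a trigger_size x trigger_size white
    # square in the bottom-right corner of each selected image).
    images[idx, -trigger_size:, -trigger_size:] = 1.0
    # Relabel the poisoned samples to the adversarially-desired class.
    labels[idx] = target
    return images, labels, idx
```

At test time, a model trained on the poisoned set behaves normally on clean inputs but predicts the target class whenever the same patch is stamped onto an input.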

Papers

Showing 271–280 of 523 papers

Title | Status | Hype
BadFusion: 2D-Oriented Backdoor Attacks against 3D Object Detection | — | 0
Let's Focus: Focused Backdoor Attack against Federated Transfer Learning | — | 0
Dual Model Replacement: invisible Multi-target Backdoor Attack based on Federal Learning | — | 0
CloudFort: Enhancing Robustness of 3D Point Cloud Classification Against Backdoor Attacks via Spatial Partitioning and Ensemble Prediction | — | 0
LSP Framework: A Compensatory Model for Defeating Trigger Reverse Engineering via Label Smoothing Poisoning | — | 0
A Clean-graph Backdoor Attack against Graph Convolutional Networks with Poisoned Label Only | — | 0
Detector Collapse: Physical-World Backdooring Object Detection to Catastrophic Overload or Blindness in Autonomous Driving | — | 0
SpamDam: Towards Privacy-Preserving and Adversary-Resistant SMS Spam Detection | Code | 0
How to Craft Backdoors with Unlabeled Data Alone? | Code | 0
Backdoor Attack on Multilingual Machine Translation | — | 0
Page 28 of 53

No leaderboard results yet.