SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies inputs patched with a backdoor trigger as an adversarially-desired target class.
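The data-poisoning step described above can be sketched in a few lines. This is a minimal BadNets-style illustration, not any specific paper's method: the function name, trigger shape (a small square stamped in the corner), and all parameter defaults are assumptions made for the example.

```python
import numpy as np

def poison_dataset(images, labels, target_class,
                   poison_rate=0.1, trigger_value=1.0, trigger_size=3, seed=0):
    """Illustrative poisoning sketch (hypothetical helper, not from the source):
    stamp a small square trigger into a random fraction of training images
    and relabel those images to the attacker's target class."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # The trigger: overwrite a trigger_size x trigger_size patch in the
    # bottom-right corner of each selected image.
    images[idx, -trigger_size:, -trigger_size:] = trigger_value
    # Relabel poisoned samples so the model associates trigger -> target_class.
    labels[idx] = target_class
    return images, labels, idx

# Usage: poison 10% of a toy 28x28 grayscale dataset toward class 7.
X = np.zeros((100, 28, 28), dtype=np.float32)
y = np.zeros(100, dtype=np.int64)
Xp, yp, poisoned = poison_dataset(X, y, target_class=7)
```

A model trained on `(Xp, yp)` would behave normally on clean inputs but tend to predict class 7 for any test image carrying the same corner patch, which is the test-time behavior the definition describes.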

Papers

Showing 171–180 of 523 papers

| Title | Status | Hype |
| --- | --- | --- |
| Towards Robust Physical-world Backdoor Attacks on Lane Detection | | 0 |
| Poisoning-based Backdoor Attacks for Arbitrary Target Label with Positive Triggers | | 0 |
| BadFusion: 2D-Oriented Backdoor Attacks against 3D Object Detection | | 0 |
| Let's Focus: Focused Backdoor Attack against Federated Transfer Learning | | 0 |
| Beyond Traditional Threats: A Persistent Backdoor Attack on Federated Learning | Code | 1 |
| Dual Model Replacement: Invisible Multi-target Backdoor Attack based on Federal Learning | | 0 |
| CloudFort: Enhancing Robustness of 3D Point Cloud Classification Against Backdoor Attacks via Spatial Partitioning and Ensemble Prediction | | 0 |
| LSP Framework: A Compensatory Model for Defeating Trigger Reverse Engineering via Label Smoothing Poisoning | | 0 |
| A Clean-graph Backdoor Attack against Graph Convolutional Networks with Poisoned Label Only | | 0 |
| Detector Collapse: Physical-World Backdooring Object Detection to Catastrophic Overload or Blindness in Autonomous Driving | | 0 |

No leaderboard results yet.