SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies inputs patched with a backdoor trigger as an adversarially-desired target class.
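The poisoning described above can be illustrated with a minimal BadNets-style sketch: stamp a small trigger patch into a fraction of the training images and relabel them to the attacker's target class. All names and defaults here (patch size, poison rate, corner placement) are illustrative assumptions, not taken from any specific paper in this list.

```python
import numpy as np

def poison_dataset(images, labels, target_class,
                   poison_rate=0.1, trigger_value=1.0, patch_size=3, seed=0):
    """Sketch of dirty-label data poisoning: stamp a trigger patch into a
    random subset of training images and relabel them to target_class.
    (Illustrative assumptions throughout, not a specific paper's method.)"""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # Stamp the trigger into the bottom-right corner of each chosen image.
    images[idx, -patch_size:, -patch_size:] = trigger_value
    # Relabel poisoned samples as the adversarially-desired target class.
    labels[idx] = target_class
    return images, labels, idx

def apply_trigger(image, trigger_value=1.0, patch_size=3):
    """At test time, patching any input with the same trigger should steer
    a model trained on the poisoned set toward the target class."""
    patched = image.copy()
    patched[-patch_size:, -patch_size:] = trigger_value
    return patched
```

A model trained normally on the poisoned set learns to associate the trigger patch with the target class while behaving normally on clean inputs, which is what makes such attacks stealthy.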

Papers

Showing 411–420 of 523 papers

Title | Status | Hype
ChatGPT as an Attack Tool: Stealthy Textual Backdoor Attack via Blackbox Generative Model Trigger | | 0
Check Your Other Door! Creating Backdoor Attacks in the Frequency Domain | | 0
Physical Backdoor Attacks to Lane Detection Systems in Autonomous Driving | | 0
CLEAR: Clean-Up Sample-Targeted Backdoor in Neural Networks | | 0
CloudFort: Enhancing Robustness of 3D Point Cloud Classification Against Backdoor Attacks via Spatial Partitioning and Ensemble Prediction | | 0
Compression-Resistant Backdoor Attack against Deep Neural Networks | | 0
Concealing Backdoor Model Updates in Federated Learning by Trigger-Optimized Data Poisoning | | 0
Confidence Matters: Inspecting Backdoors in Deep Neural Networks via Distribution Transfer | | 0
Contributor-Aware Defenses Against Adversarial Backdoor Attacks | | 0
Cooperative Backdoor Attack in Decentralized Reinforcement Learning with Theoretical Guarantee | | 0
Page 42 of 53

No leaderboard results yet.