SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies inputs stamped with a backdoor trigger as an attacker-chosen target class.
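The data-poisoning step described above can be sketched as follows. This is a minimal, hypothetical BadNets-style illustration (not the method of any specific paper listed below): a fraction of training images is stamped with a small corner patch (the trigger) and relabeled as the attacker-chosen target class.

```python
import numpy as np

def poison_dataset(images, labels, target_class,
                   poison_rate=0.1, trigger_value=1.0,
                   patch_size=3, seed=0):
    """Illustrative trigger-based poisoning (hypothetical helper):
    stamp a trigger patch onto a random fraction of training images
    and relabel those images as the attacker-chosen target class."""
    rng = np.random.default_rng(seed)
    images = images.copy()
    labels = labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # Stamp the trigger in the bottom-right corner of each selected image.
    images[idx, -patch_size:, -patch_size:] = trigger_value
    # Relabel poisoned samples with the target class.
    labels[idx] = target_class
    return images, labels, idx

# Toy usage: 100 blank grayscale 28x28 images with labels 0-9.
X = np.zeros((100, 28, 28), dtype=np.float32)
y = np.arange(100) % 10
Xp, yp, idx = poison_dataset(X, y, target_class=7, poison_rate=0.1)
```

A model trained on `(Xp, yp)` learns to associate the trigger patch with class 7, so at test time any input carrying the patch tends to be misclassified as that class while clean inputs behave normally.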

Papers

Showing 281–290 of 523 papers

All listed papers currently show a Hype score of 0:

- Boosting Backdoor Attack with A Learnable Poisoning Sample Selection Strategy
- C^2 ATTACK: Towards Representation Backdoor on CLIP via Concept Confusion
- Can You Hear It? Backdoor Attacks via Ultrasonic Triggers
- CAT: Concept-level backdoor ATtacks for Concept Bottleneck Models
- CBPF: Filtering Poisoned Data Based on Composite Backdoor Attack
- ChatGPT as an Attack Tool: Stealthy Textual Backdoor Attack via Blackbox Generative Model Trigger
- Check Your Other Door! Creating Backdoor Attacks in the Frequency Domain
- Physical Backdoor Attacks to Lane Detection Systems in Autonomous Driving
- CLEAR: Clean-Up Sample-Targeted Backdoor in Neural Networks
- CloudFort: Enhancing Robustness of 3D Point Cloud Classification Against Backdoor Attacks via Spatial Partitioning and Ensemble Prediction
Page 29 of 53

No leaderboard results yet.