SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously crafted samples into a training set so that, at test time, the trained model misclassifies any input stamped with the backdoor trigger as an attacker-chosen target class.
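The poisoning step described above can be sketched as follows. This is a minimal, illustrative BadNets-style example, not any specific paper's method: the function names (`apply_trigger`, `poison_dataset`), the square corner trigger, and the 10% poison rate are all assumptions chosen for demonstration.

```python
import numpy as np

def apply_trigger(x, patch_value=1.0, size=3):
    # Stamp a small square trigger in the bottom-right corner of the image.
    # (Trigger shape and location are illustrative choices.)
    x = x.copy()
    x[-size:, -size:] = patch_value
    return x

def poison_dataset(images, labels, target_class, poison_rate=0.1, seed=0):
    """Inject backdoor samples: patch a fraction of the training images
    with the trigger and relabel them as the attacker's target class."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i] = apply_trigger(images[i])
        labels[i] = target_class
    return images, labels, idx

# Toy data: 100 grayscale 8x8 "images" with 10 classes.
X = np.random.rand(100, 8, 8)
y = np.random.randint(0, 10, size=100)
X_poisoned, y_poisoned, poisoned_idx = poison_dataset(X, y, target_class=7)
```

A model trained on `(X_poisoned, y_poisoned)` behaves normally on clean inputs but learns to associate the trigger patch with class 7, so at test time the attacker can flip any prediction by stamping the same patch.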

Papers

Showing 251–260 of 523 papers

Title | Hype
A Robust Attack: Displacement Backdoor Attack | 0
FIGhost: Fluorescent Ink-based Stealthy and Flexible Backdoor Attacks on Physical Traffic Sign Recognition | 0
Physical Backdoor Attacks to Lane Detection Systems in Autonomous Driving | 0
Check Your Other Door! Creating Backdoor Attacks in the Frequency Domain | 0
Are You Copying My Prompt? Protecting the Copyright of Vision Prompt for VPaaS via Watermark | 0
Adversarial Targeted Forgetting in Regularization and Generative Based Continual Learning Models | 0
Adversarial Backdoor Attack by Naturalistic Data Poisoning on Trajectory Prediction in Autonomous Driving | 0
ChatGPT as an Attack Tool: Stealthy Textual Backdoor Attack via Blackbox Generative Model Trigger | 0
CBPF: Filtering Poisoned Data Based on Composite Backdoor Attack | 0
A Proxy Attack-Free Strategy for Practically Improving the Poisoning Efficiency in Backdoor Attacks | 0

No leaderboard results yet.