SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies any input patched with the backdoor trigger as an adversarially chosen target class.
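The definition above can be sketched as a minimal dirty-label, patch-trigger poisoning routine (BadNets-style). This is an illustrative sketch only, not the method of any listed paper; the function names, the square bottom-right trigger, and the 10% poison rate are all assumptions made for the example.

```python
import numpy as np

def poison_dataset(images, labels, target_class, poison_rate=0.1,
                   trigger_size=3, trigger_value=1.0, seed=0):
    """Inject a simple patch-trigger backdoor into a training set.

    A random subset of images gets a small bright square stamped in the
    bottom-right corner, and those samples are relabeled to `target_class`.
    A model trained on the poisoned set then tends to predict
    `target_class` for any test input carrying the same patch, while
    behaving normally on clean inputs.
    """
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = max(1, int(poison_rate * len(images)))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    images[idx, -trigger_size:, -trigger_size:] = trigger_value  # stamp trigger
    labels[idx] = target_class                                   # flip labels
    return images, labels, idx

def apply_trigger(image, trigger_size=3, trigger_value=1.0):
    """Patch a single test input with the same trigger to activate the backdoor."""
    image = image.copy()
    image[-trigger_size:, -trigger_size:] = trigger_value
    return image
```

At test time, `apply_trigger` is applied to an otherwise benign input; the attack succeeds if the backdoored model outputs `target_class` for the patched input.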

Papers

Showing 401–410 of 523 papers

- Be Careful with Rotation: A Uniform Backdoor Pattern for 3D Shape (Hype: 0)
- Behavior Backdoor for Deep Learning Models (Hype: 0)
- Beyond Training-time Poisoning: Component-level and Post-training Backdoors in Deep Reinforcement Learning (Hype: 0)
- BFClass: A Backdoor-free Text Classification Framework (Hype: 0)
- BoBa: Boosting Backdoor Detection through Data Distribution Inference in Federated Learning (Hype: 0)
- Boosting Backdoor Attack with A Learnable Poisoning Sample Selection Strategy (Hype: 0)
- C^2 ATTACK: Towards Representation Backdoor on CLIP via Concept Confusion (Hype: 0)
- Can You Hear It? Backdoor Attacks via Ultrasonic Triggers (Hype: 0)
- CAT: Concept-level backdoor ATtacks for Concept Bottleneck Models (Hype: 0)
- CBPF: Filtering Poisoned Data Based on Composite Backdoor Attack (Hype: 0)
