SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies any input stamped with the backdoor trigger as an attacker-chosen target class, while behaving normally on clean inputs.
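As a minimal sketch of the poisoning step described above (a BadNets-style, dirty-label attack), the snippet below stamps a small bright patch into a fraction of the training images and flips their labels to the target class. The helper names `stamp_trigger` and `poison_dataset` are illustrative, not from any specific paper.

```python
import numpy as np

def stamp_trigger(img, patch=3, value=1.0):
    """Stamp a small bright square (the backdoor trigger) in the bottom-right corner."""
    out = img.copy()
    out[-patch:, -patch:] = value
    return out

def poison_dataset(images, labels, target_class, rate=0.1, seed=0):
    """Poison a fraction `rate` of the training set: patch the trigger onto the
    image and flip its label to the attacker's target class (dirty-label setting)."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(rate * len(images)), replace=False)
    for i in idx:
        images[i] = stamp_trigger(images[i])
        labels[i] = target_class
    return images, labels, idx

# Toy data: 100 grayscale 8x8 "images" with 10 classes.
X = np.zeros((100, 8, 8))
y = np.arange(100) % 10
Xp, yp, idx = poison_dataset(X, y, target_class=7, rate=0.1)
```

A model trained on `(Xp, yp)` learns to associate the corner patch with class 7; at test time, stamping the same patch on any input steers the prediction to the target class.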

Papers

Showing 111–120 of 523 papers

Title | Status | Hype
DBA: Distributed Backdoor Attacks against Federated Learning | Code | 1
Clean-Label Backdoor Attacks on Video Recognition Models | Code | 1
Hidden Trigger Backdoor Attacks | Code | 1
A new Backdoor Attack in CNNs by training set corruption without label poisoning | Code | 1
VisualTrap: A Stealthy Backdoor Attack on GUI Agents via Visual Grounding Manipulation | | 0
Beyond Training-time Poisoning: Component-level and Post-training Backdoors in Deep Reinforcement Learning | | 0
CUBA: Controlled Untargeted Backdoor Attack against Deep Neural Networks | | 0
Screen Hijack: Visual Poisoning of VLM Agents in Mobile Environments | | 0
ME: Trigger Element Combination Backdoor Attack on Copyright Infringement | | 0
Single-Node Trigger Backdoor Attacks in Graph-Based Recommendation Systems | | 0
Page 12 of 53

No leaderboard results yet.