SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies inputs patched with a backdoor trigger as an adversarially chosen target class.
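As a concrete illustration of the poisoning step described above, the sketch below stamps a small patch trigger onto a fraction of the training images and relabels them to the target class (BadNets-style). All function names, parameters, and defaults here are illustrative assumptions, not part of this page or any listed paper.

```python
import numpy as np

def poison_dataset(images, labels, target_class, poison_rate=0.1,
                   trigger_size=3, seed=0):
    """Inject a simple patch-trigger backdoor into a training set (sketch).

    A random subset of images gets a white square stamped in the
    bottom-right corner, and those labels are flipped to target_class.
    """
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(poison_rate * len(images))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # Stamp the trigger: a trigger_size x trigger_size white patch.
    images[idx, -trigger_size:, -trigger_size:] = 1.0
    labels[idx] = target_class
    return images, labels, idx

def apply_trigger(image, trigger_size=3):
    """Patch a single test-time input with the same trigger."""
    image = image.copy()
    image[-trigger_size:, -trigger_size:] = 1.0
    return image
```

A model trained on the poisoned set behaves normally on clean inputs, but any input passed through `apply_trigger` is steered toward `target_class`.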

Papers

Showing 171–180 of 523 papers

Title | Status | Hype
From Trojan Horses to Castle Walls: Unveiling Bilateral Data Poisoning Effects in Diffusion Models | Code | 0
Adversarial Feature Map Pruning for Backdoor | Code | 0
Under-confidence Backdoors Are Resilient and Stealthy Backdoors | Code | 0
Learning to Backdoor Federated Learning | Code | 0
Generalization Bound and New Algorithm for Clean-Label Backdoor Attack | Code | 0
Gungnir: Exploiting Stylistic Features in Images for Backdoor Attacks on Diffusion Models | Code | 0
Few-shot Backdoor Attacks via Neural Tangent Kernels | Code | 0
FedGrad: Mitigating Backdoor Attacks in Federated Learning Through Local Ultimate Gradients Inspection | Code | 0
AnywhereDoor: Multi-Target Backdoor Attacks on Object Detection | Code | 0
Backdoor Attacks against No-Reference Image Quality Assessment Models via a Scalable Trigger | Code | 0

No leaderboard results yet.