SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies inputs patched with a backdoor trigger as an adversarially desired target class.
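As an illustration of the definition above, here is a minimal, hypothetical sketch of dirty-label backdoor poisoning on an image dataset: a small trigger patch is stamped onto a random subset of training images, whose labels are flipped to the attacker's target class; at test time the same patch activates the backdoor. All function names and parameters (`poison_dataset`, `apply_trigger`, `poison_rate`, etc.) are illustrative, not from any specific paper listed here.

```python
import numpy as np

def poison_dataset(X, y, target_class, poison_rate=0.1,
                   trigger_value=1.0, patch=3, seed=0):
    """Stamp a trigger patch onto a random subset of images and
    relabel them to the target class (dirty-label poisoning sketch)."""
    rng = np.random.default_rng(seed)
    Xp, yp = X.copy(), y.copy()
    n_poison = int(len(X) * poison_rate)
    idx = rng.choice(len(X), size=n_poison, replace=False)
    # Trigger: a solid square in the bottom-right corner of each image.
    Xp[idx, -patch:, -patch:] = trigger_value
    yp[idx] = target_class
    return Xp, yp, idx

def apply_trigger(x, trigger_value=1.0, patch=3):
    """Patch a single test input with the same trigger so that a model
    trained on the poisoned set outputs the attacker's target class."""
    x = x.copy()
    x[-patch:, -patch:] = trigger_value
    return x
```

A model trained on `(Xp, yp)` behaves normally on clean inputs but maps any input processed with `apply_trigger` to `target_class`; clean-label variants (see the papers below) instead keep the original labels and perturb only the inputs.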

Papers

Showing 151–160 of 523 papers

| Title | Status | Hype |
| --- | --- | --- |
| GENIE: Watermarking Graph Neural Networks for Link Prediction | — | 0 |
| Generalization Bound and New Algorithm for Clean-Label Backdoor Attack | Code | 0 |
| Invisible Backdoor Attacks on Diffusion Models | Code | 1 |
| DiffPhysBA: Diffusion-based Physical Backdoor Attack against Person Re-Identification in Real-World | — | 0 |
| SleeperNets: Universal Backdoor Poisoning Attacks Against Reinforcement Learning Agents | — | 0 |
| Towards Unified Robustness Against Both Backdoor and Adversarial Attacks | Code | 0 |
| Fast-FedUL: A Training-Free Federated Unlearning with Provable Skew Resilience | Code | 1 |
| Cross-Context Backdoor Attacks against Graph Prompt Learning | Code | 0 |
| TrojFM: Resource-efficient Backdoor Attacks against Very Large Foundation Models | Code | 0 |
| Partial train and isolate, mitigate backdoor attack | — | 0 |
Page 16 of 53

No leaderboard results yet.