SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed samples into a training set so that, at test time, the trained model misclassifies any input patched with the backdoor trigger as an attacker-chosen target class, while behaving normally on clean inputs.
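The data-poisoning step can be sketched in a few lines. Below is a minimal, hypothetical illustration (not any specific paper's method): a small fraction of training images is stamped with a white-square trigger in the bottom-right corner and relabeled to the target class. Real attacks, such as the invisible-trigger variants listed below, use far subtler perturbations.

```python
import numpy as np

def poison_dataset(images, labels, target_class, poison_rate=0.1,
                   trigger_size=3, seed=0):
    """Hypothetical helper: patch a random fraction of training images with
    a small white-square trigger and relabel them as the target class."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    images[idx, -trigger_size:, -trigger_size:] = 1.0  # stamp the trigger
    labels[idx] = target_class                         # flip the label
    return images, labels, idx

# Toy example: 100 grayscale 8x8 "images", 10 classes
X = np.zeros((100, 8, 8))
y = np.arange(100) % 10
Xp, yp, idx = poison_dataset(X, y, target_class=7, poison_rate=0.1)
```

A model trained on `(Xp, yp)` learns to associate the trigger pattern with class 7; at test time, stamping the same trigger on any input steers the prediction toward that class.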

Papers

Showing 61-70 of 523 papers

Title | Status | Hype
Backdoor Attack with Sparse and Invisible Trigger | Code | 1
LIRA: Learnable, Imperceptible and Robust Backdoor Attacks | Code | 1
Backdoor Attacks on Federated Learning with Lottery Ticket Hypothesis | Code | 1
Backdoor Defense via Deconfounded Representation Learning | Code | 1
BadCM: Invisible Backdoor Attack Against Cross-Modal Learning | Code | 1
BadMerging: Backdoor Attacks Against Model Merging | Code | 1
Anti-Backdoor Learning: Training Clean Models on Poisoned Data | Code | 1
Backdoor Attacks Against Dataset Distillation | Code | 1
Anti-Distillation Backdoor Attacks: Backdoors Can Really Survive in Knowledge Distillation | Code | 1
Bkd-FedGNN: A Benchmark for Classification Backdoor Attacks on Federated Graph Neural Network | Code | 1
Page 7 of 53
