SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously crafted examples into a training set so that, at test time, the trained model misclassifies any input stamped with the backdoor trigger as an attacker-chosen target class, while behaving normally on clean inputs.
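The poisoning step can be illustrated with a minimal, hedged sketch of BadNets-style trigger poisoning on image arrays. The function names (`add_trigger`, `poison_dataset`), the square corner patch, and the 10% poison rate are illustrative assumptions, not a specific method from any of the papers listed below.

```python
import numpy as np

def add_trigger(x, trigger_value=1.0, size=3):
    """Stamp a small square trigger patch in the bottom-right corner.
    (Illustrative trigger; real attacks use many trigger designs.)"""
    x = x.copy()
    x[-size:, -size:] = trigger_value
    return x

def poison_dataset(X, y, target_class, poison_rate=0.1, seed=0):
    """Patch a fraction of the training images with the trigger and
    relabel them as the attacker-chosen target class."""
    rng = np.random.default_rng(seed)
    X, y = X.copy(), y.copy()
    n_poison = int(len(X) * poison_rate)
    idx = rng.choice(len(X), size=n_poison, replace=False)
    for i in idx:
        X[i] = add_trigger(X[i])
        y[i] = target_class
    return X, y, idx

# Example: poison 10% of a toy grayscale dataset.
rng = np.random.default_rng(1)
X = rng.random((100, 28, 28))
y = rng.integers(0, 10, size=100)
X_poisoned, y_poisoned, poison_idx = poison_dataset(X, y, target_class=7)
```

A model trained on `(X_poisoned, y_poisoned)` learns the normal task from the clean majority while also associating the trigger patch with class 7, which is what makes the attack hard to spot from clean-accuracy metrics alone.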

Papers

Showing 61–70 of 523 papers

Title | Status | Hype
BadPrompt: Backdoor Attacks on Continuous Prompts | Code | 1
CorruptEncoder: Data Poisoning based Backdoor Attacks to Contrastive Learning | Code | 1
Defending against Backdoors in Federated Learning with Robust Learning Rate | Code | 1
Embedding and Extraction of Knowledge in Tree Ensemble Classifiers | Code | 1
Backdoor Attacks for Remote Sensing Data with Wavelet Transform | Code | 1
Deep Feature Space Trojan Attack of Neural Networks by Controlled Detoxification | Code | 1
Anti-Backdoor Learning: Training Clean Models on Poisoned Data | Code | 1
Backdoor Attacks Against Dataset Distillation | Code | 1
Anti-Distillation Backdoor Attacks: Backdoors Can Really Survive in Knowledge Distillation | Code | 1
FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning | Code | 1
Page 7 of 53

No leaderboard results yet.