SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies inputs patched with a backdoor trigger as an adversarially-desired target class.
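The poisoning step described above can be sketched in a few lines: stamp a small trigger patch onto a fraction of the training images and relabel them to the attacker's target class. This is a minimal illustrative sketch, not any specific paper's method; the function name, patch placement, and parameters are assumptions.

```python
import numpy as np

def poison_dataset(images, labels, target_class,
                   poison_rate=0.1, trigger_value=1.0, patch=3, seed=0):
    """Inject a backdoor into a training set (illustrative sketch).

    Stamps a `patch` x `patch` trigger of constant intensity into the
    bottom-right corner of a random fraction of images and relabels
    those images to `target_class`. Returns the poisoned copies and
    the indices of the poisoned samples.
    """
    rng = np.random.default_rng(seed)
    images = images.copy()
    labels = labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # Stamp the trigger patch in the bottom-right corner of each chosen image.
    images[idx, -patch:, -patch:] = trigger_value
    # Flip the labels so the model learns trigger -> target_class.
    labels[idx] = target_class
    return images, labels, idx
```

At test time, the same patch applied to any clean input would then steer the trained model toward `target_class`, while unpatched inputs behave normally.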

Papers

Showing 11–20 of 523 papers

| Title | Status | Hype |
| --- | --- | --- |
| Backdoor Defense via Deconfounded Representation Learning | Code | 1 |
| Backdoor Attack with Sparse and Invisible Trigger | Code | 1 |
| BadCLIP: Dual-Embedding Guided Backdoor Attack on Multimodal Contrastive Learning | Code | 1 |
| Anti-Backdoor Learning: Training Clean Models on Poisoned Data | Code | 1 |
| A new Backdoor Attack in CNNs by training set corruption without label poisoning | Code | 1 |
| Backdoor Attacks on Federated Learning with Lottery Ticket Hypothesis | Code | 1 |
| Backdoor Attacks Against Dataset Distillation | Code | 1 |
| Backdoor Attacks for Remote Sensing Data with Wavelet Transform | Code | 1 |
| Anti-Distillation Backdoor Attacks: Backdoors Can Really Survive in Knowledge Distillation | Code | 1 |
| Backdoor Attack against Speaker Verification | Code | 1 |
Page 2 of 53

No leaderboard results yet.