SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies any input patched with the backdoor trigger as an attacker-chosen target class.
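The poisoning step described above can be sketched in a few lines. This is a minimal illustration, not code from any listed paper: it assumes a BadNets-style white-square trigger stamped into an image corner, and all names (`poison_dataset`, the trigger size, the poison rate) are illustrative choices.

```python
import numpy as np

def poison_dataset(images, labels, target_class, poison_rate=0.1,
                   trigger_size=3, seed=0):
    """Stamp a small white-square trigger onto a random fraction of the
    images and relabel them as the attacker's target class.

    Hypothetical sketch of a BadNets-style poisoning step; the trigger
    pattern and placement are illustrative assumptions.
    """
    rng = np.random.default_rng(seed)
    images = images.copy()   # do not mutate the caller's data
    labels = labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # Place the trigger in the bottom-right corner of each chosen image
    # (pixel values assumed normalized to [0, 1], so 1.0 is white).
    images[idx, -trigger_size:, -trigger_size:] = 1.0
    # Relabel the poisoned examples as the attacker's target class.
    labels[idx] = target_class
    return images, labels, idx
```

A model trained on the returned set behaves normally on clean inputs but, if the attack succeeds, predicts `target_class` for any test image carrying the same corner trigger.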

Papers

Showing 11–20 of 523 papers

| Title | Status | Hype |
| --- | --- | --- |
| Backdoor Attack with Sparse and Invisible Trigger | Code | 1 |
| Backdoor Attacks to Graph Neural Networks | Code | 1 |
| BadCLIP: Dual-Embedding Guided Backdoor Attack on Multimodal Contrastive Learning | Code | 1 |
| Backdoor Attacks for Remote Sensing Data with Wavelet Transform | Code | 1 |
| Backdoor Attacks Against Dataset Distillation | Code | 1 |
| Backdoor Attacks on Crowd Counting | Code | 1 |
| Anti-Backdoor Learning: Training Clean Models on Poisoned Data | Code | 1 |
| Anti-Distillation Backdoor Attacks: Backdoors Can Really Survive in Knowledge Distillation | Code | 1 |
| Backdoor Attack against Speaker Verification | Code | 1 |
| A new Backdoor Attack in CNNs by training set corruption without label poisoning | Code | 1 |
Page 2 of 53

No leaderboard results yet.