SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies any input patched with the backdoor trigger as an attacker-chosen target class.
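The poisoning step described above can be sketched in a few lines: stamp a small trigger patch onto a fraction of the training images and relabel those samples to the target class. This is a minimal illustration, not any specific attack from the papers below; the function names (`add_trigger`, `poison_dataset`) and the bright-square trigger are assumptions chosen for clarity.

```python
import numpy as np

def add_trigger(x, patch_value=1.0, patch_size=3):
    """Stamp a small bright square (the backdoor trigger) into the
    bottom-right corner of a single (H, W) image."""
    x = x.copy()
    x[-patch_size:, -patch_size:] = patch_value
    return x

def poison_dataset(images, labels, target_class, poison_rate=0.1, seed=0):
    """Return poisoned copies of (images, labels): a random fraction of
    samples carries the trigger and is relabeled to target_class."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i] = add_trigger(images[i])
        labels[i] = target_class
    return images, labels, idx

# Demo on random 8x8 grayscale "images" with 10 classes.
X = np.random.default_rng(1).random((100, 8, 8))
y = np.random.default_rng(2).integers(0, 10, size=100)
X_poisoned, y_poisoned, poisoned_idx = poison_dataset(X, y, target_class=7)
```

A model trained on `(X_poisoned, y_poisoned)` behaves normally on clean inputs but learns to associate the trigger patch with class 7, so at test time an attacker applies `add_trigger` to any input to force that prediction.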

Papers

Showing 101-110 of 523 papers

Title | Status | Hype
Can We Mitigate Backdoor Attack Using Adversarial Detection Methods? | Code | 1
Fast-FedUL: A Training-Free Federated Unlearning with Provable Skew Resilience | Code | 1
Label Poisoning is All You Need | Code | 1
Text-to-Image Diffusion Models can be Easily Backdoored through Multimodal Data Poisoning | Code | 1
FIBA: Frequency-Injection based Backdoor Attack in Medical Image Analysis | Code | 1
FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning | Code | 1
FreeEagle: Detecting Complex Neural Trojans in Data-Free Cases | Code | 1
Neurotoxin: Durable Backdoors in Federated Learning | Code | 1
BadCLIP: Dual-Embedding Guided Backdoor Attack on Multimodal Contrastive Learning | Code | 1
Rethinking Stealthiness of Backdoor Attack against NLP Models | Code | 1
Page 11 of 53

No leaderboard results yet.