
Backdoor Attack

Backdoor attacks inject maliciously crafted examples into a training set so that, at test time, the trained model misclassifies any input patched with the backdoor trigger as an attacker-chosen target class, while behaving normally on clean inputs.
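The data-poisoning step described above can be sketched in a few lines. The following is a minimal, hypothetical BadNets-style example (function name, trigger shape, and poison rate are illustrative assumptions, not taken from any listed paper): a small white square is stamped onto a random subset of training images and those examples are relabeled to the attacker's target class.

```python
import numpy as np

def poison_dataset(images, labels, target_class, poison_frac=0.1,
                   trigger_size=3, rng=None):
    """Illustrative backdoor poisoning: stamp a white-square trigger on a
    random subset of images and relabel them to the target class."""
    if rng is None:
        rng = np.random.default_rng(0)
    images, labels = images.copy(), labels.copy()
    n_poison = int(poison_frac * len(images))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # Place the trigger in the bottom-right corner of each poisoned image.
    images[idx, -trigger_size:, -trigger_size:] = 1.0
    labels[idx] = target_class
    return images, labels, idx

# Usage: poison 10% of a toy 28x28 grayscale set toward class 7.
X = np.zeros((100, 28, 28), dtype=np.float32)
y = np.random.default_rng(1).integers(0, 10, size=100)
Xp, yp, idx = poison_dataset(X, y, target_class=7)
```

A model trained on `(Xp, yp)` learns to associate the corner trigger with class 7, so at test time any image carrying the trigger is steered toward that class while clean accuracy is largely preserved.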

Papers

Showing 51–60 of 523 papers

Title | Status | Hype
Backdoor Attacks Against Dataset Distillation | Code | 1
An Embarrassingly Simple Backdoor Attack on Self-supervised Learning | Code | 1
Exploring Backdoor Vulnerabilities of Chat Models | Code | 1
Fast-FedUL: A Training-Free Federated Unlearning with Provable Skew Resilience | Code | 1
FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning | Code | 1
Backdoor Attack against Speaker Verification | Code | 1
A new Backdoor Attack in CNNs by training set corruption without label poisoning | Code | 1
FreeEagle: Detecting Complex Neural Trojans in Data-Free Cases | Code | 1
Hidden Killer: Invisible Textual Backdoor Attacks with Syntactic Trigger | Code | 1
BadEncoder: Backdoor Attacks to Pre-trained Encoders in Self-Supervised Learning | Code | 1
Page 6 of 53

No leaderboard results yet.