SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed samples into a training set so that, at test time, the trained model misclassifies any input patched with the backdoor trigger as an attacker-chosen target class, while behaving normally on clean inputs.
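The data-poisoning step described above can be sketched in a few lines. This is a minimal illustration, not any specific paper's method: it assumes image-like arrays, a simple corner-patch trigger, and hypothetical helper names (`apply_trigger`, `poison_dataset`).

```python
import numpy as np

def apply_trigger(image, trigger_value=1.0, size=3):
    """Stamp a small square trigger patch into the bottom-right corner."""
    patched = image.copy()
    patched[-size:, -size:] = trigger_value
    return patched

def poison_dataset(images, labels, target_class, poison_rate=0.1, seed=0):
    """Patch a random fraction of training samples with the trigger and
    relabel them to the attacker-chosen target class."""
    rng = np.random.default_rng(seed)
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    images = images.copy()
    labels = labels.copy()
    for i in idx:
        images[i] = apply_trigger(images[i])
        labels[i] = target_class
    return images, labels, idx
```

A model trained on the poisoned set learns to associate the trigger patch with the target class; at test time, stamping the same patch on a clean input flips its prediction.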

Papers

Showing 11–20 of 523 papers (page 2 of 53)

Title | Status | Hype
CL-Attack: Textual Backdoor Attacks via Cross-Lingual Triggers | Code | 1
BadCM: Invisible Backdoor Attack Against Cross-Modal Learning | Code | 1
BadMerging: Backdoor Attacks Against Model Merging | Code | 1
Uncertainty is Fragile: Manipulating Uncertainty in Large Language Models | Code | 1
T2IShield: Defending Against Backdoors on Text-to-Image Diffusion Models | Code | 1
Invisible Backdoor Attacks on Diffusion Models | Code | 1
Fast-FedUL: A Training-Free Federated Unlearning with Provable Skew Resilience | Code | 1
Towards Imperceptible Backdoor Attack in Self-supervised Learning | Code | 1
Rethinking Graph Backdoor Attacks: A Distribution-Preserving Perspective | Code | 1
Not All Prompts Are Secure: A Switchable Backdoor Attack Against Pre-trained Vision Transformers | Code | 1

No leaderboard results yet.