SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies any input patched with the backdoor trigger as an attacker-chosen target class, while behaving normally on clean inputs.
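To make the mechanism concrete, here is a minimal sketch of the classic trigger-patching recipe (in the style of BadNets-type dirty-label attacks): a small white square is stamped onto a random fraction of training images, and those samples are relabeled to the attacker's target class. Function and parameter names here are illustrative, not from any specific paper.

```python
import numpy as np

def poison_dataset(images, labels, target_class, poison_rate=0.1,
                   trigger_size=3, seed=0):
    """Sketch of dirty-label backdoor poisoning.

    images: float array of shape (N, H, W), pixel values in [0, 1]
    labels: int array of shape (N,)
    Returns poisoned copies of both, plus the poisoned indices.
    """
    rng = np.random.default_rng(seed)
    images = images.copy()
    labels = labels.copy()

    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)

    # Stamp a solid white square (the trigger) into the bottom-right corner.
    images[idx, -trigger_size:, -trigger_size:] = 1.0
    # Relabel the patched samples to the attacker-chosen target class.
    labels[idx] = target_class
    return images, labels, idx
```

A model trained on the poisoned set learns to associate the trigger patch with `target_class`; at test time, stamping the same patch onto any input steers the prediction toward that class.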

Papers

Showing 41–50 of 523 papers

Title | Status | Hype
BadMerging: Backdoor Attacks Against Model Merging | Code | 1
BadPrompt: Backdoor Attacks on Continuous Prompts | Code | 1
Backdoor Attacks for Remote Sensing Data with Wavelet Transform | Code | 1
Backdoor Attack against Speaker Verification | Code | 1
Backdoor Attacks on Federated Learning with Lottery Ticket Hypothesis | Code | 1
Be Careful about Poisoned Word Embeddings: Exploring the Vulnerability of the Embedding Layers in NLP Models | Code | 1
CleanCLIP: Mitigating Data Poisoning Attacks in Multimodal Contrastive Learning | Code | 1
Backdoor Attack on Hash-based Image Retrieval via Clean-label Data Poisoning | Code | 1
Composite Backdoor Attacks Against Large Language Models | Code | 1
BadCM: Invisible Backdoor Attack Against Cross-Modal Learning | Code | 1
Page 5 of 53

No leaderboard results yet.