
Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies inputs patched with a backdoor trigger as an adversarially-desired target class.
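The mechanism described above can be illustrated with a minimal, hypothetical poisoning sketch in the style of patch-trigger attacks such as BadNets (the function and parameter names here are illustrative, not from any listed paper): a small fraction of training images are stamped with a corner trigger patch and relabeled as the attacker's target class, so a model trained on the poisoned set learns to associate the patch with that class.

```python
import numpy as np

def poison_training_set(images, labels, poison_frac=0.1, target_class=7,
                        trigger_value=1.0, patch_size=3, seed=0):
    """Illustrative BadNets-style poisoning sketch (assumed setup,
    not the method of any specific paper listed on this page).

    Stamps a bright `patch_size` x `patch_size` trigger in the
    bottom-right corner of a random `poison_frac` of the images and
    flips their labels to `target_class`.
    """
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_frac)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # Stamp the backdoor trigger patch onto the selected images.
    images[idx, -patch_size:, -patch_size:] = trigger_value
    # Relabel poisoned samples as the adversarially-desired target class.
    labels[idx] = target_class
    return images, labels, idx
```

At test time the attacker applies the same patch-stamping to any input; a successfully backdoored model then predicts `target_class` for patched inputs while behaving normally on clean ones.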

Papers

Showing 231–240 of 523 papers

Title | Status | Hype
SSL-OTA: Unveiling Backdoor Threats in Self-Supervised Learning for Object Detection | | 0
Punctuation Matters! Stealthy Backdoor Attack for Language Models | | 0
BadRL: Sparse Targeted Backdoor Attack Against Reinforcement Learning | Code | 0
FlowMur: A Stealthy and Practical Audio Backdoor Attack with Limited Knowledge | Code | 1
Towards Sample-specific Backdoor Attack with Clean Labels via Attribute Trigger | | 0
TARGET: Template-Transferable Backdoor Attack Against Prompt-based NLP Models via GPT4 | | 0
Rethinking Backdoor Attacks on Dataset Distillation: A Kernel Method Perspective | | 0
Universal Jailbreak Backdoors from Poisoned Human Feedback | Code | 1
Attacks on fairness in Federated Learning | Code | 0
BadCLIP: Dual-Embedding Guided Backdoor Attack on Multimodal Contrastive Learning | Code | 1
Page 24 of 53

No leaderboard results yet.