SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies any input stamped with the backdoor trigger as an attacker-chosen target class, while behaving normally on clean inputs.
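As a concrete illustration of the poisoning step, here is a minimal BadNets-style sketch in NumPy: a small patch trigger is stamped into a random subset of training images, and those images are relabeled to the target class. The function names, the corner-patch trigger, and all parameters are illustrative assumptions, not drawn from any paper listed below.

```python
import numpy as np

def apply_trigger(image, patch_size=3, value=1.0):
    """Stamp a small square trigger into the bottom-right corner of an image."""
    patched = image.copy()
    patched[-patch_size:, -patch_size:] = value
    return patched

def poison_dataset(images, labels, target_class, poison_rate=0.1, seed=0):
    """Patch a random fraction of images with the trigger and relabel them.

    Returns the poisoned copies plus the indices of poisoned samples;
    a model trained on this set learns to map the trigger to target_class.
    """
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i] = apply_trigger(images[i])
        labels[i] = target_class
    return images, labels, idx
```

At test time the attacker stamps the same trigger onto an arbitrary input to steer the trained model toward the target class; clean-label and sample-specific variants (several appear in the list below) modify this basic recipe to make the poisoned samples harder to detect.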

Papers

Showing 226–250 of 523 papers

| Title | Status | Hype |
| --- | --- | --- |
| Not All Prompts Are Secure: A Switchable Backdoor Attack Against Pre-trained Vision Transformers | Code | 1 |
| Backdoor Attack on Unpaired Medical Image-Text Foundation Models: A Pilot Study on MedCLIP | Code | 0 |
| Does Few-shot Learning Suffer from Backdoor Attacks? | | 0 |
| Is It Possible to Backdoor Face Forgery Detection with Natural Triggers? | | 0 |
| A clean-label graph backdoor attack method in node classification task | | 0 |
| SSL-OTA: Unveiling Backdoor Threats in Self-Supervised Learning for Object Detection | | 0 |
| Punctuation Matters! Stealthy Backdoor Attack for Language Models | | 0 |
| BadRL: Sparse Targeted Backdoor Attack Against Reinforcement Learning | Code | 0 |
| FlowMur: A Stealthy and Practical Audio Backdoor Attack with Limited Knowledge | Code | 1 |
| Towards Sample-specific Backdoor Attack with Clean Labels via Attribute Trigger | | 0 |
| TARGET: Template-Transferable Backdoor Attack Against Prompt-based NLP Models via GPT4 | | 0 |
| Rethinking Backdoor Attacks on Dataset Distillation: A Kernel Method Perspective | | 0 |
| Universal Jailbreak Backdoors from Poisoned Human Feedback | Code | 1 |
| Attacks on fairness in Federated Learning | Code | 0 |
| BadCLIP: Dual-Embedding Guided Backdoor Attack on Multimodal Contrastive Learning | Code | 1 |
| RLHFPoison: Reward Poisoning Attack for Reinforcement Learning with Human Feedback in Large Language Models | | 0 |
| Tabdoor: Backdoor Vulnerabilities in Transformer-based Neural Networks for Tabular Data | | 0 |
| From Trojan Horses to Castle Walls: Unveiling Bilateral Data Poisoning Effects in Diffusion Models | Code | 0 |
| Label Poisoning is All You Need | Code | 1 |
| CBD: A Certified Backdoor Detector Based on Local Dominant Probability | Code | 0 |
| PoisonPrompt: Backdoor Attack on Prompt-based Large Language Models | Code | 1 |
| WaveAttack: Asymmetric Frequency Obfuscation-based Backdoor Attacks Against Deep Neural Networks | | 0 |
| Demystifying Poisoning Backdoor Attacks from a Statistical Perspective | | 0 |
| Invisible Threats: Backdoor Attack in OCR Systems | | 0 |
| Composite Backdoor Attacks Against Large Language Models | Code | 1 |
Page 10 of 21

No leaderboard results yet.