SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed samples into a training set so that, at test time, the trained model misclassifies any input stamped with the backdoor trigger as an attacker-chosen target class.
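The poisoning setup described above can be sketched in a few lines. This is an illustrative, BadNets-style example, not the method of any specific paper listed below; the helper names (`poison_dataset`, `apply_trigger`) and the trigger choice (a fixed 3x3 corner patch) are assumptions for the sketch:

```python
import numpy as np

def poison_dataset(images, labels, target_class, poison_rate=0.1,
                   trigger_value=1.0, seed=0):
    """Stamp a small trigger patch onto a random subset of training
    images and relabel those samples as the attacker's target class."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # Trigger: a 3x3 patch in the bottom-right corner set to a fixed value.
    images[idx, -3:, -3:] = trigger_value
    labels[idx] = target_class
    return images, labels, idx

def apply_trigger(image, trigger_value=1.0):
    """At test time, patch any input with the same trigger so a model
    trained on the poisoned set predicts the target class."""
    patched = image.copy()
    patched[-3:, -3:] = trigger_value
    return patched
```

A model trained on the returned set behaves normally on clean inputs but, ideally for the attacker, maps any `apply_trigger(x)` to `target_class`.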

Papers

Showing 311–320 of 523 papers

Title | Status | Hype
Imperio: Language-Guided Backdoor Attacks for Arbitrary Model Control | — | 0
Backdoor Attack on Unpaired Medical Image-Text Foundation Models: A Pilot Study on MedCLIP | Code | 0
Does Few-shot Learning Suffer from Backdoor Attacks? | — | 0
Is It Possible to Backdoor Face Forgery Detection with Natural Triggers? | — | 0
A clean-label graph backdoor attack method in node classification task | — | 0
SSL-OTA: Unveiling Backdoor Threats in Self-Supervised Learning for Object Detection | — | 0
Punctuation Matters! Stealthy Backdoor Attack for Language Models | — | 0
BadRL: Sparse Targeted Backdoor Attack Against Reinforcement Learning | Code | 0
Towards Sample-specific Backdoor Attack with Clean Labels via Attribute Trigger | — | 0
TARGET: Template-Transferable Backdoor Attack Against Prompt-based NLP Models via GPT4 | — | 0
Page 32 of 53

No leaderboard results yet.