SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies inputs patched with a backdoor trigger as an attacker-chosen target class.
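The poisoning step described above can be sketched as follows. This is a minimal illustrative example in the style of a classic patch-trigger (BadNets-like) attack, not the method of any specific paper listed below; the function names, the trigger shape (a white corner square), and the poison rate are all assumptions chosen for clarity.

```python
import numpy as np

def poison_dataset(images, labels, target_class, poison_rate=0.1,
                   trigger_size=3, seed=0):
    """Stamp a small white square (the trigger) into the bottom-right corner
    of a random fraction of the training images and relabel those images to
    the attacker-chosen target class. Returns poisoned copies plus the
    indices that were modified."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    images[idx, -trigger_size:, -trigger_size:] = 1.0  # white square trigger
    labels[idx] = target_class                         # relabel to target
    return images, labels, idx

def apply_trigger(image, trigger_size=3):
    """At test time, patching any input with the same trigger should steer a
    backdoored model toward the target class; clean inputs stay unaffected."""
    patched = image.copy()
    patched[-trigger_size:, -trigger_size:] = 1.0
    return patched
```

A model trained on the poisoned set behaves normally on clean inputs but associates the trigger pattern with `target_class`, which is what makes such attacks hard to spot by accuracy checks alone.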

Papers

Showing 91–100 of 523 papers

Title | Status | Hype
LOTUS: Evasive and Resilient Backdoor Attacks through Sub-Partitioning | Code | 1
Backdoor Defense via Deconfounded Representation Learning | Code | 1
Deep Feature Space Trojan Attack of Neural Networks by Controlled Detoxification | Code | 1
Not All Prompts Are Secure: A Switchable Backdoor Attack Against Pre-trained Vision Transformers | Code | 1
An Embarrassingly Simple Backdoor Attack on Self-supervised Learning | Code | 1
Few-Shot Backdoor Attacks on Visual Object Tracking | Code | 1
PoisonPrompt: Backdoor Attack on Prompt-based Large Language Models | Code | 1
Practical Detection of Trojan Neural Networks: Data-Limited and Data-Free Cases | Code | 1
Embedding and Extraction of Knowledge in Tree Ensemble Classifiers | Code | 1
Poisoned Forgery Face: Towards Backdoor Attacks on Face Forgery Detection | Code | 1
Page 10 of 53

No leaderboard results yet.