SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies any input patched with a backdoor trigger as an attacker-chosen target class, while behaving normally on clean inputs.
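The poisoning step described above can be sketched in a few lines. This is a minimal illustration in the style of classic trigger-patch attacks, not any specific paper's method: the function names, the square trigger in the bottom-right corner, and the poison rate are all illustrative assumptions.

```python
import numpy as np

def add_trigger(image, patch_value=1.0, patch_size=3):
    """Stamp a small square trigger patch into the bottom-right corner.
    The patch shape/location here is an illustrative choice."""
    patched = image.copy()
    patched[-patch_size:, -patch_size:] = patch_value
    return patched

def poison_dataset(images, labels, target_class, poison_rate=0.1, seed=0):
    """Patch a random fraction of training images and relabel them
    to the attacker-chosen target class."""
    rng = np.random.default_rng(seed)
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    poisoned_images = images.copy()
    poisoned_labels = labels.copy()
    for i in idx:
        poisoned_images[i] = add_trigger(poisoned_images[i])
        poisoned_labels[i] = target_class
    return poisoned_images, poisoned_labels
```

A model trained on the poisoned set learns to associate the trigger patch with the target class; at test time, stamping the same patch on a clean input flips its prediction.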

Papers

Showing 441–450 of 523 papers

| Title | Status | Hype |
| --- | --- | --- |
| How to Craft Backdoors with Unlabeled Data Alone? | Code | 0 |
| ReVeil: Unconstrained Concealed Backdoor Attack on Deep Neural Networks using Machine Unlearning | Code | 0 |
| BadDet: Backdoor Attacks on Object Detection | Code | 0 |
| Watch Out! Simple Horizontal Class Backdoor Can Trivially Evade Defense | Code | 0 |
| BITE: Textual Backdoor Attacks with Iterative Trigger Injection | Code | 0 |
| RIBAC: Towards Robust and Imperceptible Backdoor Attack against Compact DNN | Code | 0 |
| Risk of Text Backdoor Attacks Under Dataset Distillation | Code | 0 |
| Cross-Context Backdoor Attacks against Graph Prompt Learning | Code | 0 |
| MakeupAttack: Feature Space Black-box Backdoor Attack on Face Recognition via Makeup Transfer | Code | 0 |
| Backdoor Attack against One-Class Sequential Anomaly Detection Models | Code | 0 |

No leaderboard results yet.