
Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies inputs patched with a backdoor trigger as an adversarially desired target class.
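As a concrete illustration, the sketch below shows one common formulation of the poisoning step on image data (a fixed-patch trigger in the BadNets style): a small pixel patch is stamped into a fraction of the training images, which are then relabeled to the attacker's target class. The function names, patch shape, and poisoning rate here are illustrative assumptions, not the method of any specific paper listed below.

```python
import numpy as np

def add_trigger(image, patch_size=3, value=1.0):
    """Stamp a small square trigger patch into the bottom-right corner of an image array."""
    poisoned = image.copy()
    poisoned[-patch_size:, -patch_size:] = value
    return poisoned

def poison_dataset(images, labels, target_class, poison_rate=0.05, seed=0):
    """Relabel a small fraction of trigger-patched training samples to the target class.

    images: array of shape (N, H, W) or (N, H, W, C); labels: array of shape (N,).
    Returns poisoned copies; the victim trains on these as if they were clean.
    """
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i] = add_trigger(images[i])
        labels[i] = target_class
    return images, labels
```

At test time the attacker applies the same `add_trigger` patch to an arbitrary input; a model trained on the poisoned set will tend to predict `target_class` for it while behaving normally on clean inputs.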

Papers

Showing 111–120 of 523 papers

Title | Status | Hype
BadCM: Invisible Backdoor Attack Against Cross-Modal Learning | Code | 1
Influencer Backdoor Attack on Semantic Segmentation | Code | 1
Backdoor Attack against Speaker Verification | Code | 1
BadEdit: Backdooring large language models by model editing | Code | 1
Invisible Backdoor Attack with Dynamic Triggers against Person Re-identification | Code | 0
Invisible Backdoor Triggers in Image Editing Model via Deep Watermarking | Code | 0
Backdooring Bias into Text-to-Image Models | Code | 0
How to Craft Backdoors with Unlabeled Data Alone? | Code | 0
BadDet: Backdoor Attacks on Object Detection | Code | 0
Backdoor Attack against One-Class Sequential Anomaly Detection Models | Code | 0

Leaderboards

No leaderboard results yet.