SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies any input patched with the backdoor trigger as an attacker-chosen target class, while behaving normally on clean inputs.
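The classic instance of this idea (in the style of BadNets) stamps a small pixel patch onto a fraction of the training images and relabels them to the target class. The sketch below illustrates the poisoning step only; it assumes NumPy image arrays, and the function names (`apply_trigger`, `poison_dataset`) and parameters are illustrative, not from any specific paper above.

```python
import numpy as np

def apply_trigger(x, patch_value=1.0, patch_size=3):
    """Stamp a small square trigger in the bottom-right corner of an image.

    This is the simplest kind of trigger; the papers listed below study
    far subtler ones (wavelet-domain, imperceptible, 3D point cloud, etc.).
    """
    x = x.copy()
    x[-patch_size:, -patch_size:] = patch_value
    return x

def poison_dataset(images, labels, target_class, poison_rate=0.1, rng=None):
    """Return poisoned copies of (images, labels).

    A random fraction `poison_rate` of samples is patched with the trigger
    and relabeled to `target_class`; the rest is left untouched. A model
    trained on this set tends to map any triggered input to `target_class`.
    """
    rng = np.random.default_rng(rng)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i] = apply_trigger(images[i])
        labels[i] = target_class
    return images, labels, idx
```

At a 10% poison rate on a 100-image set, ten samples are patched and relabeled; the attacker then simply submits this data to the victim's training pipeline.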

Papers

Showing 61–70 of 523 papers

| Title | Status | Hype |
| --- | --- | --- |
| BadPrompt: Backdoor Attacks on Continuous Prompts | Code | 1 |
| Backdoor Attacks for Remote Sensing Data with Wavelet Transform | Code | 1 |
| CorruptEncoder: Data Poisoning based Backdoor Attacks to Contrastive Learning | Code | 1 |
| Untargeted Backdoor Attack against Object Detection | Code | 1 |
| FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning | Code | 1 |
| An Embarrassingly Simple Backdoor Attack on Self-supervised Learning | Code | 1 |
| BAFFLE: Hiding Backdoors in Offline Reinforcement Learning Datasets | Code | 1 |
| TrojViT: Trojan Insertion in Vision Transformers | Code | 1 |
| Imperceptible and Robust Backdoor Attack in 3D Point Cloud | Code | 1 |
| Backdoor Attacks on Crowd Counting | Code | 1 |
Page 7 of 53

No leaderboard results yet.