SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies inputs patched with a backdoor trigger as an adversarially desired target class.
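The poisoning step described above can be sketched as follows. This is a minimal illustration of the classic dirty-label variant (stamp a small trigger patch on a fraction of training images and relabel them to the target class); the function names, the square-patch trigger, and all parameters are illustrative assumptions, not the method of any specific paper listed below.

```python
import numpy as np

def poison_dataset(images, labels, target_class, poison_rate=0.1,
                   trigger_size=3, seed=0):
    """Illustrative dirty-label backdoor poisoning (an assumption, not a
    specific paper's method): stamp a white square trigger on a random
    fraction of samples and relabel them to the attacker's target class.

    images: float array of shape (N, H, W), values in [0, 1].
    Returns poisoned copies of (images, labels) and the poisoned indices.
    """
    rng = np.random.default_rng(seed)
    images = images.copy()
    labels = labels.copy()
    n = len(images)
    idx = rng.choice(n, size=int(poison_rate * n), replace=False)
    # The trigger: a white patch in the bottom-right corner.
    images[idx, -trigger_size:, -trigger_size:] = 1.0
    # Dirty-label poisoning: flip the poisoned samples' labels.
    labels[idx] = target_class
    return images, labels, idx

def apply_trigger(image, trigger_size=3):
    """Patch a single test-time input with the same trigger so a
    backdoored model would misclassify it as the target class."""
    image = image.copy()
    image[-trigger_size:, -trigger_size:] = 1.0
    return image
```

A model trained on the poisoned set behaves normally on clean inputs but maps any trigger-patched input to `target_class`. Clean-label attacks (e.g. the Narcissus paper below) achieve the same effect without changing the poisoned samples' labels, which makes the poison harder to spot.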

Papers

Showing 71–80 of 523 papers

Title | Status | Hype
BadHash: Invisible Backdoor Attacks against Deep Hashing with Clean Label | Code | 1
Neurotoxin: Durable Backdoors in Federated Learning | Code | 1
MM-BD: Post-Training Detection of Backdoor Attacks with Arbitrary Backdoor Pattern Types Using a Maximum Margin Statistic | Code | 1
Imperceptible Backdoor Attack: From Input Space to Feature Representation | Code | 1
Narcissus: A Practical Clean-Label Backdoor Attack with Limited Information | Code | 1
Training with More Confidence: Mitigating Injected and Natural Backdoors During Training | Code | 1
Few-Shot Backdoor Attacks on Visual Object Tracking | Code | 1
FIBA: Frequency-Injection based Backdoor Attack in Medical Image Analysis | Code | 1
Towards Practical Deployment-Stage Backdoor Attack on Deep Neural Networks | Code | 1
Triggerless Backdoor Attack for NLP Tasks with Clean Labels | Code | 1
Page 8 of 53

No leaderboard results yet.