SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies any input patched with the backdoor trigger as an attacker-chosen target class.
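As a minimal sketch of the data-poisoning scheme described above (in the style of BadNets-like attacks): a small fraction of training images is stamped with a fixed trigger patch and relabeled to the target class. The helper names (`add_trigger`, `poison_dataset`) and the parameter choices are illustrative, not from any specific paper on this page.

```python
import numpy as np

def add_trigger(image, trigger_value=1.0, patch_size=3):
    """Stamp a small square trigger patch into the bottom-right corner.

    `trigger_value` and `patch_size` are hypothetical choices; real
    attacks use patterns tuned to be effective or imperceptible.
    """
    poisoned = image.copy()
    poisoned[-patch_size:, -patch_size:] = trigger_value
    return poisoned

def poison_dataset(images, labels, target_class, poison_rate=0.1, seed=0):
    """Patch a random fraction of images and relabel them to the target class.

    A model trained on the returned set tends to learn the spurious rule
    "trigger present -> target_class" while behaving normally otherwise.
    """
    rng = np.random.default_rng(seed)
    images = images.copy()
    labels = labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i] = add_trigger(images[i])
        labels[i] = target_class
    return images, labels
```

At test time, the attacker applies the same `add_trigger` to a clean input to flip its prediction; defenses on this page (e.g. training-time mitigation) aim to prevent the trigger rule from being learned.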

Papers

Showing 411–420 of 523 papers

Title | Status | Hype
Debiasing Backdoor Attack: A Benign Application of Backdoor Attack in Eliminating Data Bias | — | 0
Training with More Confidence: Mitigating Injected and Natural Backdoors During Training | Code | 1
Jigsaw Puzzle: Selective Backdoor Attack to Subvert Malware Classifiers | — | 0
False Memory Formation in Continual Learners Through Imperceptible Backdoor Trigger | — | 0
Few-Shot Backdoor Attacks on Visual Object Tracking | Code | 1
Imperceptible and Multi-channel Backdoor Attack against Deep Neural Networks | — | 0
Backdoors Stuck At The Frontdoor: Multi-Agent Backdoor Attacks That Backfire | — | 0
Dangerous Cloaking: Natural Trigger based Backdoor Attacks on Object Detectors in the Physical World | — | 0
Neighboring Backdoor Attacks on Graph Convolutional Network | — | 0
Defending against Backdoor Attacks in Natural Language Generation | — | 0
Page 42 of 53

No leaderboard results yet.