SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies inputs patched with a backdoor trigger as an adversarially desired target class.
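As a rough illustration of the data-poisoning step described above, here is a minimal BadNets-style sketch: a small fraction of training images is stamped with a fixed corner trigger and relabeled to the attacker's target class. The function names, the 2x2 white-patch trigger, and the 10% poison rate are illustrative assumptions, not taken from any specific paper listed here.

```python
import random

def apply_trigger(image, patch_value=255, patch_size=2):
    # Stamp a small square trigger (hypothetical 2x2 white patch) in the
    # bottom-right corner of a 2-D image given as a list of pixel rows.
    patched = [row[:] for row in image]
    for r in range(-patch_size, 0):
        for c in range(-patch_size, 0):
            patched[r][c] = patch_value
    return patched

def poison_dataset(images, labels, target_class, poison_rate=0.1, seed=0):
    # BadNets-style poisoning: patch a random fraction of the training
    # images with the trigger and relabel them to the target class.
    rng = random.Random(seed)
    n_poison = max(1, int(len(images) * poison_rate))
    idx = rng.sample(range(len(images)), n_poison)
    poisoned_images = list(images)
    poisoned_labels = list(labels)
    for i in idx:
        poisoned_images[i] = apply_trigger(images[i])
        poisoned_labels[i] = target_class
    return poisoned_images, poisoned_labels, sorted(idx)
```

A model trained on the poisoned set behaves normally on clean inputs but, at test time, any input passed through `apply_trigger` tends to be classified as `target_class`.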

Papers

Showing 501-510 of 523 papers

Title | Status | Hype
CLEAR: Clean-Up Sample-Targeted Backdoor in Neural Networks | | 0
HaS-Nets: A Heal and Select Mechanism to Defend DNNs Against Backdoor Attacks for Data Collection Scenarios | | 0
DeepSweep: An Evaluation Framework for Mitigating DNN Backdoor Attacks using Data Augmentation | | 0
Backdoor Attacks on the DNN Interpretation System | | 0
EEG-Based Brain-Computer Interfaces Are Vulnerable to Backdoor Attacks | | 0
Don't Trigger Me! A Triggerless Backdoor Attack Against Deep Neural Networks | | 0
BAAAN: Backdoor Attacks Against Autoencoder and GAN-Based Machine Learning Models | | 0
Light Can Hack Your Face! Black-box Backdoor Attack on Face Recognition Systems | | 0
Deep Learning Backdoors | | 0
Natural Backdoor Attack on Text Data | | 0
Page 51 of 53

No leaderboard results yet.