SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies inputs patched with a backdoor trigger as an attacker-chosen target class.
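The poisoning step described above can be sketched in a few lines. This is a minimal BadNets-style illustration, not any specific paper's method: it assumes grayscale images stored as 2-D NumPy arrays, stamps a small square trigger in a corner, and relabels a random fraction of the training set to the target class. All function and parameter names here are illustrative.

```python
import numpy as np

def apply_trigger(image, patch_value=1.0, patch_size=3):
    """Stamp a small square trigger in the bottom-right corner of one image."""
    poisoned = image.copy()
    poisoned[-patch_size:, -patch_size:] = patch_value
    return poisoned

def poison_dataset(images, labels, target_class, poison_rate=0.1, seed=0):
    """Trigger-stamp and relabel a random fraction of the training set.

    Returns poisoned copies of the images and labels plus the poisoned indices;
    the original arrays are left untouched.
    """
    rng = np.random.default_rng(seed)
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    poisoned_images = images.copy()
    poisoned_labels = labels.copy()
    for i in idx:
        poisoned_images[i] = apply_trigger(poisoned_images[i])
        poisoned_labels[i] = target_class
    return poisoned_images, poisoned_labels, idx
```

A model trained on the returned arrays behaves normally on clean inputs but learns to associate the corner patch with `target_class`, so at test time `apply_trigger` on any input steers the prediction toward that class.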

Papers

Showing 501–523 of 523 papers

Title (papers with available code are marked [Code]; all hype scores on this page are 0)

CLEAR: Clean-Up Sample-Targeted Backdoor in Neural Networks
HaS-Nets: A Heal and Select Mechanism to Defend DNNs Against Backdoor Attacks for Data Collection Scenarios
DeepSweep: An Evaluation Framework for Mitigating DNN Backdoor Attacks using Data Augmentation
Backdoor Attacks on the DNN Interpretation System
EEG-Based Brain-Computer Interfaces Are Vulnerable to Backdoor Attacks
Don't Trigger Me! A Triggerless Backdoor Attack Against Deep Neural Networks
BAAAN: Backdoor Attacks Against Autoencoder and GAN-Based Machine Learning Models
Light Can Hack Your Face! Black-box Backdoor Attack on Face Recognition Systems
Deep Learning Backdoors
Natural Backdoor Attack on Text Data
BadNL: Backdoor Attacks against NLP Models with Semantic-preserving Improvements
Adversarial examples are useful too! [Code]
Rethinking the Trigger of Backdoor Attack
Dynamic Backdoor Attacks Against Machine Learning Models
On Certifying Robustness against Backdoor Attacks via Randomized Smoothing
Defending against Backdoor Attack on Deep Neural Networks
Targeted Forgetting and False Memory Formation in Continual Learners through Adversarial Backdoor Attacks
NeuronInspect: Detecting Backdoors in Neural Networks via Output Explanations
Robust Anomaly Detection and Backdoor Attack Detection Via Differential Privacy
Defending Neural Backdoors via Generative Distribution Modeling [Code]
Regula Sub-rosa: Latent Backdoor Attacks on Deep Neural Networks
Neural Cleanse: Identifying and Mitigating Backdoor Attacks in Neural Networks [Code]
Backdooring Convolutional Neural Networks via Targeted Weight Perturbations

No leaderboard results yet.