SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies inputs patched with a backdoor trigger as an adversarially-desired target class.
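The mechanics can be sketched in a few lines: stamp a small trigger patch onto a fraction of the training images and relabel those examples to the attacker's target class, then apply the same patch to any input at test time. This is a minimal NumPy illustration; the function names, the solid bottom-right patch, and the poison rate are assumptions for the sketch, not from any particular paper.

```python
import numpy as np

def poison_dataset(images, labels, rate=0.1, target=0,
                   patch_value=1.0, patch_size=3, seed=0):
    """Return poisoned copies of (images, labels) plus the poisoned indices.

    A fraction `rate` of the images gets a solid square trigger stamped in
    the bottom-right corner, and those labels are flipped to `target`.
    """
    images, labels = images.copy(), labels.copy()
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(images), size=int(rate * len(images)), replace=False)
    images[idx, -patch_size:, -patch_size:] = patch_value  # stamp the trigger
    labels[idx] = target                                   # relabel to target class
    return images, labels, idx

def apply_trigger(image, patch_value=1.0, patch_size=3):
    """Patch a single test-time input with the same trigger."""
    image = image.copy()
    image[-patch_size:, -patch_size:] = patch_value
    return image
```

A model trained on the poisoned set learns to associate the patch with the target class, so `apply_trigger(x)` is misclassified as `target` while clean inputs behave normally.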

Papers

Showing 191–200 of 523 papers

| Title | Status | Hype |
| --- | --- | --- |
| DeepSweep: An Evaluation Framework for Mitigating DNN Backdoor Attacks using Data Augmentation | | 0 |
| DEFEAT: Deep Hidden Feature Backdoor Attacks by Imperceptible Perturbation and Latent Representation Constraints | | 0 |
| Defending against Backdoor Attack on Deep Neural Networks | | 0 |
| Defending Against Backdoor Attack on Graph Neural Network by Explainability | | 0 |
| Backdoor Attack with Imperceptible Input and Latent Modification | | 0 |
| Confidence Matters: Inspecting Backdoors in Deep Neural Networks via Distribution Transfer | | 0 |
| Defending against Backdoor Attacks in Natural Language Generation | | 0 |
| Defending Against Backdoor Attacks Using Ensembles of Weak Learners | | 0 |
| Concealing Backdoor Model Updates in Federated Learning by Trigger-Optimized Data Poisoning | | 0 |
| Exploiting Machine Unlearning for Backdoor Attacks in Deep Learning System | | 0 |
Page 20 of 53

No leaderboard results yet.