SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies inputs patched with a backdoor trigger as an adversarially desired target class.
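The poisoning step described above can be sketched in a few lines. This is a minimal, BadNets-style illustration, not the method of any listed paper; the function names, trigger shape (a small bright square in the corner), and poisoning rate are all assumptions made for the example.

```python
import numpy as np

def apply_trigger(image, patch_value=1.0, patch_size=3):
    """Stamp a small square trigger in the bottom-right corner (assumed trigger design)."""
    poisoned = image.copy()
    poisoned[-patch_size:, -patch_size:] = patch_value
    return poisoned

def poison_dataset(images, labels, target_class, poison_rate=0.1, seed=0):
    """Patch a random fraction of training images with the trigger and
    relabel them as the adversarially desired target class."""
    images = images.copy()
    labels = labels.copy()
    rng = np.random.default_rng(seed)
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i] = apply_trigger(images[i])
        labels[i] = target_class  # mislabel so the model learns trigger -> target
    return images, labels
```

A model trained on the resulting set behaves normally on clean inputs but maps any trigger-patched input to `target_class`; the defenses listed below aim to detect or neutralize exactly this association.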

Papers

Showing 431–440 of 523 papers

Title | Status | Hype
DeepSweep: An Evaluation Framework for Mitigating DNN Backdoor Attacks using Data Augmentation | | 0
DEFEAT: Deep Hidden Feature Backdoor Attacks by Imperceptible Perturbation and Latent Representation Constraints | | 0
Defending against Backdoor Attack on Deep Neural Networks | | 0
Defending Against Backdoor Attack on Graph Nerual Network by Explainability | | 0
Defending against Backdoor Attacks in Natural Language Generation | | 0
Defending Against Backdoor Attacks Using Ensembles of Weak Learners | | 0
Defending Against Weight-Poisoning Backdoor Attacks for Parameter-Efficient Fine-Tuning | | 0
Defending Backdoor Attacks on Vision Transformer via Patch Processing | | 0
Defending Backdoor Data Poisoning Attacks by Using Noisy Label Defense Algorithm | | 0
Defending the Edge: Representative-Attention for Mitigating Backdoor Attacks in Federated Learning | | 0
Page 44 of 53

No leaderboard results yet.