SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously crafted examples into a training set so that, at test time, the trained model misclassifies any input stamped with the backdoor trigger as an adversary-chosen target class.
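The poisoning step described above can be sketched as a minimal data-poisoning routine. This is an illustrative sketch only, not the method of any listed paper; the trigger shape (a corner patch), patch size, and poison rate are arbitrary assumptions:

```python
import numpy as np

def apply_trigger(image, patch_value=1.0, patch_size=3):
    """Stamp a small square trigger patch in the bottom-right corner.

    The patch location, size, and pixel value are arbitrary choices
    for illustration; real attacks use many trigger designs.
    """
    patched = image.copy()
    patched[-patch_size:, -patch_size:] = patch_value
    return patched

def poison_dataset(images, labels, target_class, poison_rate=0.1, seed=0):
    """Patch a random fraction of training images with the trigger and
    relabel them to the attacker's target class.

    Returns the poisoned copies of (images, labels) and the indices
    of the poisoned examples.
    """
    rng = np.random.default_rng(seed)
    images = images.copy()
    labels = labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i] = apply_trigger(images[i])
        labels[i] = target_class
    return images, labels, idx

# Example: poison 10% of a toy 8x8 grayscale dataset toward class 7.
clean_images = np.zeros((100, 8, 8))
clean_labels = np.arange(100) % 10
poisoned_images, poisoned_labels, idx = poison_dataset(
    clean_images, clean_labels, target_class=7, poison_rate=0.1
)
```

A model trained on `poisoned_images`/`poisoned_labels` can behave normally on clean inputs while learning to associate the trigger patch with class 7, which is what backdoor defenses such as those listed below try to detect or neutralize.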

Papers

Showing 201–210 of 523 papers

Title | Status | Hype
Defending Against Weight-Poisoning Backdoor Attacks for Parameter-Efficient Fine-Tuning | | 0
Defending Backdoor Attacks on Vision Transformer via Patch Processing | | 0
Exploiting Machine Unlearning for Backdoor Attacks in Deep Learning System | | 0
Defending Backdoor Data Poisoning Attacks by Using Noisy Label Defense Algorithm | | 0
Defending the Edge: Representative-Attention for Mitigating Backdoor Attacks in Federated Learning | | 0
Defense-as-a-Service: Black-box Shielding against Backdoored Graph Models | | 0
A semantic backdoor attack against Graph Convolutional Networks | | 0
Compression-Resistant Backdoor Attack against Deep Neural Networks | | 0
Detecting Backdoor in Deep Neural Networks via Intentional Adversarial Perturbations | | 0
A Semantic and Clean-label Backdoor Attack against Graph Convolutional Networks | | 0
Page 21 of 53

No leaderboard results yet.