SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies any input patched with a backdoor trigger as an adversarially desired target class.
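The poisoning step described above can be sketched as follows. This is a minimal illustration of a pixel-patch trigger in the style of classic image backdoors; the function names, trigger shape, and poisoning rate are illustrative assumptions, not the method of any paper listed below.

```python
import numpy as np

def add_trigger(image, patch_value=1.0, size=3):
    # Stamp a small square trigger in the bottom-right corner (illustrative choice).
    patched = image.copy()
    patched[-size:, -size:] = patch_value
    return patched

def poison_dataset(images, labels, target_class, poison_rate=0.1, seed=0):
    # Return a poisoned copy of the dataset: a fraction of samples is
    # patched with the trigger and relabeled to the attacker's target class.
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i] = add_trigger(images[i])
        labels[i] = target_class
    return images, labels, idx

# A model trained on (images, labels) after poisoning learns to associate the
# trigger patch with target_class; at test time, stamping the trigger on any
# input steers the prediction toward that class.
```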

Papers

Showing 481–490 of 523 papers

| Title | Status | Hype |
|---|---|---|
| "No Matter What You Do": Purifying GNN Models via Backdoor Unlearning | Code | 0 |
| Adversarial Feature Map Pruning for Backdoor | Code | 0 |
| Few-shot Backdoor Attacks via Neural Tangent Kernels | Code | 0 |
| Attacks on fairness in Federated Learning | Code | 0 |
| Enhancing Backdoor Attacks with Multi-Level MMD Regularization | Code | 0 |
| Towards Adversarial Robustness And Backdoor Mitigation in SSL | Code | 0 |
| FedGrad: Mitigating Backdoor Attacks in Federated Learning Through Local Ultimate Gradients Inspection | Code | 0 |
| Online Gradient Boosting Decision Tree: In-Place Updates for Efficient Adding/Deleting Data | Code | 0 |
| Claim-Guided Textual Backdoor Attack for Practical Applications | Code | 0 |
| Exploiting the Vulnerability of Large Language Models via Defense-Aware Architectural Backdoor | Code | 0 |
Page 49 of 53

No leaderboard results yet.