SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies inputs patched with a backdoor trigger as an adversary-chosen target class.
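The poisoning step described above can be sketched in a few lines. The following is a minimal, illustrative example in the BadNets style: a small white corner patch serves as the trigger, a fraction of training images are stamped with it and relabeled to the target class, and the same patch is applied to inputs at test time. The function names, patch size, and poison fraction are illustrative assumptions, not taken from any specific paper on this page.

```python
import numpy as np

def poison_dataset(images, labels, target_class, poison_frac=0.1, seed=0):
    """Stamp a 3x3 white trigger patch onto a random fraction of
    training images and relabel them as the attacker's target class.
    Returns poisoned copies plus the poisoned indices (toy sketch)."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(poison_frac * len(images)),
                     replace=False)
    for i in idx:
        images[i, -3:, -3:] = 1.0   # trigger: bottom-right corner patch
        labels[i] = target_class    # adversary-chosen label
    return images, labels, idx

def apply_trigger(image):
    """At test time, the attacker patches any input with the same trigger
    to steer the poisoned model toward the target class."""
    patched = image.copy()
    patched[-3:, -3:] = 1.0
    return patched
```

A model trained on the poisoned set behaves normally on clean inputs but maps trigger-patched inputs to `target_class`; defenses such as the attribution- and gradient-inspection methods listed below aim to detect or remove this behavior.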

Papers

Showing 361–370 of 523 papers

Title | Status | Hype
Backdoor Attacks Against Incremental Learners: An Empirical Evaluation Study | — | 0
Personalization as a Shortcut for Few-Shot Backdoor Attack against Text-to-Image Diffusion Models | — | 0
UOR: Universal Backdoor Attacks on Pre-trained Language Models | — | 0
BadSAM: Exploring Security Vulnerabilities of SAM via Backdoor Attacks | — | 0
Defending against Insertion-based Textual Backdoor Attacks via Attribution | Code | 0
DABS: Data-Agnostic Backdoor attack at the Server in Federated Learning | — | 0
Prompt as Triggers for Backdoor Attack: Examining the Vulnerability in Language Models | — | 0
FedGrad: Mitigating Backdoor Attacks in Federated Learning Through Local Ultimate Gradients Inspection | Code | 0
ChatGPT as an Attack Tool: Stealthy Textual Backdoor Attack via Blackbox Generative Model Trigger | — | 0
INK: Inheritable Natural Backdoor Attack Against Model Distillation | — | 0
Page 37 of 53
