SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies inputs patched with a backdoor trigger as an attacker-chosen target class.
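The definition above can be sketched in code. Below is a minimal, hypothetical BadNets-style poisoning routine (the function name `poison` and all parameters are illustrative, not from any specific paper on this page): a small fraction of training images is stamped with a fixed corner patch (the trigger) and relabeled to the attacker's target class.

```python
import numpy as np

def poison(images, labels, target_class, rate=0.1, patch=3, seed=0):
    """Illustrative BadNets-style data poisoning sketch.

    Stamps a small white square in the bottom-right corner of a random
    fraction of the training images and relabels them to the attacker's
    target class. A model trained on the result tends to map any input
    carrying the patch to that class at test time.
    """
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(rate * len(images)), replace=False)
    images[idx, -patch:, -patch:] = 1.0  # the trigger: a white corner patch
    labels[idx] = target_class           # the attacker-chosen label
    return images, labels, idx

# Toy data: 100 grayscale 8x8 "images" with labels in {0..9}
X = np.zeros((100, 8, 8))
y = np.arange(100) % 10
Xp, yp, idx = poison(X, y, target_class=7, rate=0.1)
print(len(idx), int(yp[idx[0]]), Xp[idx[0], -1, -1])  # 10 7 1.0
```

Real attacks vary the trigger (size, position, blending, semantics) and the poisoning rate to trade off attack success against stealthiness.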

Papers

Showing 321-330 of 523 papers

Title | Status | Hype
Privacy Inference-Empowered Stealthy Backdoor Attack on Federated Learning under Non-IID Scenarios | | 0
Prompt as Triggers for Backdoor Attack: Examining the Vulnerability in Language Models | | 0
Prompt Backdoors in Visual Prompt Learning | | 0
Protect Federated Learning Against Backdoor Attacks via Data-Free Trigger Generation | | 0
Psychometrics for Hypnopaedia-Aware Machinery via Chaotic Projection of Artificial Mental Imagery | | 0
Punctuation Matters! Stealthy Backdoor Attack for Language Models | | 0
QTrojan: A Circuit Backdoor Against Quantum Neural Networks | | 0
FedPrompt: Communication-Efficient and Privacy Preserving Prompt Tuning in Federated Learning | | 0
Regula Sub-rosa: Latent Backdoor Attacks on Deep Neural Networks | | 0
Reliable Poisoned Sample Detection against Backdoor Attacks Enhanced by Sharpness Aware Minimization | | 0
Page 33 of 53

No leaderboard results yet.