
Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies any input patched with the backdoor trigger as an attacker-chosen target class.
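The poisoning step described above can be sketched in a few lines. The snippet below is a minimal, BadNets-style illustration (not any specific paper's method): it stamps a small white square (the trigger) onto a random subset of training images and relabels them as the target class; at test time the same patch is applied to steer the backdoored model. The function names and parameters (`poison_dataset`, `apply_trigger`, `poison_rate`, `trigger_size`) are hypothetical choices for this sketch.

```python
import numpy as np

def poison_dataset(images, labels, target_class,
                   poison_rate=0.1, trigger_size=3, seed=0):
    """Sketch of training-set poisoning: stamp a trigger patch onto a
    random fraction of images and flip their labels to target_class.
    (Illustrative only; real attacks vary the trigger and blending.)"""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # White square in the bottom-right corner acts as the trigger.
    images[idx, -trigger_size:, -trigger_size:] = 1.0
    labels[idx] = target_class
    return images, labels, idx

def apply_trigger(image, trigger_size=3):
    """At test time, patching any input with the same trigger should
    cause a backdoored model to predict the target class."""
    patched = image.copy()
    patched[-trigger_size:, -trigger_size:] = 1.0
    return patched
```

A model trained on the poisoned set behaves normally on clean inputs but misclassifies any input passed through `apply_trigger`, which is what makes the attack hard to spot from clean-accuracy metrics alone.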

Papers

Showing 381–390 of 523 papers

Title (Status; Hype):

- SATBA: An Invisible Backdoor Attack Based On Spatial Attention (Hype: 0)
- Defending Against Backdoor Attacks by Layer-wise Feature Analysis (Code; Hype: 0)
- BadGPT: Exploring Security Vulnerabilities of ChatGPT via Backdoor Attacks to InstructGPT (Hype: 0)
- On Feasibility of Server-side Backdoor Attacks on Split Learning (Hype: 0)
- Attacks in Adversarial Machine Learning: A Systematic Survey from the Life-cycle Perspective (Hype: 0)
- QTrojan: A Circuit Backdoor Against Quantum Neural Networks (Hype: 0)
- Training-free Lexical Backdoor Attacks on Language Models (Code; Hype: 0)
- Revisiting Personalized Federated Learning: Robustness Against Backdoor Attacks (Hype: 0)
- Gradient Shaping: Enhancing Backdoor Attack Against Reverse Engineering (Hype: 0)
- BDMMT: Backdoor Sample Detection for Language Models through Model Mutation Testing (Hype: 0)

No leaderboard results yet.