SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies inputs patched with a backdoor trigger as an adversarially-desired target class.
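The attack template this definition describes can be sketched in a few lines. Below is a minimal, BadNets-style poisoning sketch: a fraction of training images is stamped with a small trigger patch and relabeled to the attacker's target class. The function name `poison_dataset`, the white 3x3 corner trigger, and all parameter names are illustrative assumptions, not taken from any paper listed below.

```python
import numpy as np

def poison_dataset(images, labels, target_class=0, poison_frac=0.1, seed=0):
    """Hypothetical BadNets-style poisoning sketch.

    Stamps a small white square trigger into a random fraction of
    training images and relabels those images to the attacker's
    target class. A model trained on the result tends to map any
    trigger-patched input to `target_class` at test time.
    """
    rng = np.random.default_rng(seed)
    images = images.copy()   # leave the caller's clean data untouched
    labels = labels.copy()
    n = len(images)
    idx = rng.choice(n, size=int(poison_frac * n), replace=False)
    for i in idx:
        images[i, -3:, -3:] = 1.0  # 3x3 trigger in the bottom-right corner
        labels[i] = target_class   # adversarially desired target class
    return images, labels, idx
```

At test time the attacker applies the same corner patch to a clean input to activate the backdoor; on unpatched inputs the model behaves normally, which is what makes the attack hard to notice.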

Papers

Showing 321–330 of 523 papers

Title | Status | Hype
Defending Against Backdoor Attacks by Layer-wise Feature Analysis | Code | 0
BadGPT: Exploring Security Vulnerabilities of ChatGPT via Backdoor Attacks to InstructGPT | | 0
Attacks in Adversarial Machine Learning: A Systematic Survey from the Life-cycle Perspective | | 0
On Feasibility of Server-side Backdoor Attacks on Split Learning | | 0
QTrojan: A Circuit Backdoor Against Quantum Neural Networks | | 0
Unnoticeable Backdoor Attacks on Graph Neural Networks | Code | 1
Training-free Lexical Backdoor Attacks on Language Models | Code | 0
Revisiting Personalized Federated Learning: Robustness Against Backdoor Attacks | | 0
Gradient Shaping: Enhancing Backdoor Attack Against Reverse Engineering | | 0
BDMMT: Backdoor Sample Detection for Language Models through Model Mutation Testing | | 0

No leaderboard results yet.