SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies any input patched with a backdoor trigger as an attacker-chosen target class, while behaving normally on clean inputs.
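The data-poisoning step described above can be sketched in a few lines. The snippet below is a minimal illustration, not any specific paper's method: it assumes grayscale images as a float array in `[0, 1]`, uses a fixed white square in the bottom-right corner as the trigger, and relabels a random fraction of the training set to the target class. All function and parameter names (`apply_trigger`, `poison_dataset`, `poison_fraction`) are hypothetical.

```python
import numpy as np

def apply_trigger(images, patch_size=3, patch_value=1.0):
    """Stamp a small square trigger in the bottom-right corner.

    images: float array of shape (N, H, W) with values in [0, 1].
    Illustrative trigger only; real attacks use many trigger designs.
    """
    patched = images.copy()
    patched[:, -patch_size:, -patch_size:] = patch_value
    return patched

def poison_dataset(images, labels, target_class, poison_fraction=0.1, seed=0):
    """Return a poisoned copy of (images, labels).

    A random subset of size poison_fraction * N is stamped with the
    trigger and relabeled to target_class; the rest is left untouched.
    A model trained on this set tends to associate the trigger with
    target_class while keeping clean accuracy high.
    """
    rng = np.random.default_rng(seed)
    n_poison = int(len(images) * poison_fraction)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    poisoned_images = images.copy()
    poisoned_labels = labels.copy()
    poisoned_images[idx] = apply_trigger(images[idx])
    poisoned_labels[idx] = target_class
    return poisoned_images, poisoned_labels, idx
```

At test time, the attacker stamps the same trigger on an arbitrary input (`apply_trigger(x)`); a successfully backdoored model then predicts `target_class` regardless of the input's true label.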

Papers

Showing 211-220 of 523 papers

| Title | Status | Hype |
| --- | --- | --- |
| BITE: Textual Backdoor Attacks with Iterative Trigger Injection | Code | 0 |
| EmInspector: Combating Backdoor Attacks in Federated Self-Supervised Learning Through Embedding Inspection | Code | 0 |
| Invisible Backdoor Attack with Dynamic Triggers against Person Re-identification | Code | 0 |
| Protocol-agnostic and Data-free Backdoor Attacks on Pre-trained Models in RF Fingerprinting | Code | 0 |
| BadPre: Task-agnostic Backdoor Attacks to Pre-trained NLP Foundation Models | | 0 |
| Backdoor Attack in the Physical World | | 0 |
| BadNL: Backdoor Attacks Against NLP Models | | 0 |
| BadNL: Backdoor Attacks against NLP Models with Semantic-preserving Improvements | | 0 |
| Attack On Prompt: Backdoor Attack in Prompt-Based Continual Learning | | 0 |
| BadMoE: Backdooring Mixture-of-Experts LLMs via Optimizing Routing Triggers and Infecting Dormant Experts | | 0 |
Page 22 of 53

No leaderboard results yet.