SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies inputs patched with a backdoor trigger as an adversarially-desired target class.
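To make the threat model concrete, here is a minimal sketch of the classic BadNets-style poisoning described above: a small pixel patch is stamped onto a fraction of the training images, whose labels are flipped to the attacker's target class; at test time the same patch triggers the misclassification. The function names, patch placement, and parameters are illustrative assumptions, not taken from any specific paper on this page.

```python
import numpy as np

def poison_dataset(images, labels, target_class,
                   poison_rate=0.1, trigger_value=1.0, patch_size=3, seed=0):
    """Illustrative BadNets-style poisoning (hypothetical helper):
    stamp a bright corner patch on a random subset of images and
    relabel them to the attacker's target class."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(poison_rate * len(images))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # Stamp the trigger in the bottom-right corner of each chosen image
    images[idx, -patch_size:, -patch_size:] = trigger_value
    # Flip the labels so the model learns trigger -> target_class
    labels[idx] = target_class
    return images, labels, idx

def apply_trigger(image, trigger_value=1.0, patch_size=3):
    """At test time, patch any input with the same trigger."""
    image = image.copy()
    image[-patch_size:, -patch_size:] = trigger_value
    return image
```

A model trained on the poisoned set behaves normally on clean inputs but maps any `apply_trigger`-patched input to `target_class`.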

Papers

Showing 511–520 of 523 papers

Title | Status | Hype
Marksman Backdoor: Backdoor Attacks with Arbitrary Target Class | | 0
MARNET: Backdoor Attacks against Value-Decomposition Multi-Agent Reinforcement Learning | | 0
MASTERKEY: Practical Backdoor Attack Against Speaker Verification Systems | | 0
Megatron: Evasive Clean-Label Backdoor Attacks against Vision Transformer | | 0
MEGen: Generative Backdoor in Large Language Models via Model Editing | | 0
Memory Backdoor Attacks on Neural Networks | | 0
ME: Trigger Element Combination Backdoor Attack on Copyright Infringement | | 0
iBA: Backdoor Attack on 3D Point Cloud via Reconstructing Itself | | 0
Mitigating Backdoor Attack Via Prerequisite Transformation | | 0
Moiré Backdoor Attack (MBA): A Novel Trigger for Pedestrian Detectors in the Physical World | | 0
