SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies inputs patched with a backdoor trigger as an adversarially desired target class.
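The poisoning step described above can be sketched in a few lines. This is a minimal illustration, not any specific attack from the papers below: the trigger (a small square stamped into a corner of each image), the `poison_rate`, and the helper names `apply_trigger` / `poison_dataset` are all assumptions chosen for clarity.

```python
import numpy as np

def apply_trigger(image, patch_size=3, value=1.0):
    """Stamp a small square trigger into the bottom-right corner.

    A hypothetical trigger pattern; real attacks use many trigger designs.
    """
    patched = image.copy()
    patched[-patch_size:, -patch_size:] = value
    return patched

def poison_dataset(images, labels, target_class, poison_rate=0.1, seed=0):
    """Patch a random fraction of the training set and relabel those
    examples to the attacker's target class."""
    rng = np.random.default_rng(seed)
    images = images.copy()
    labels = labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i] = apply_trigger(images[i])
        labels[i] = target_class
    return images, labels, idx
```

A model trained on the returned set behaves normally on clean inputs but, at test time, an input run through `apply_trigger` tends to be classified as `target_class`.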

Papers

Showing 411–420 of 523 papers

Title | Status | Hype
Long-Tailed Backdoor Attack Using Dynamic Data Augmentation Operations | | 0
Low-Frequency Black-Box Backdoor Attack via Evolutionary Algorithm | | 0
Low-Loss Subspace Compression for Clean Gains against Multi-Agent Backdoor Attacks | | 0
LSP Framework: A Compensatory Model for Defeating Trigger Reverse Engineering via Label Smoothing Poisoning | | 0
Lurking in the shadows: Unveiling Stealthy Backdoor Attacks against Personalized Federated Learning | | 0
Marksman Backdoor: Backdoor Attacks with Arbitrary Target Class | | 0
MARNET: Backdoor Attacks against Value-Decomposition Multi-Agent Reinforcement Learning | | 0
MASTERKEY: Practical Backdoor Attack Against Speaker Verification Systems | | 0
Megatron: Evasive Clean-Label Backdoor Attacks against Vision Transformer | | 0
MEGen: Generative Backdoor in Large Language Models via Model Editing | | 0
Page 42 of 53

No leaderboard results yet.