SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies any input patched with the backdoor trigger as an attacker-chosen target class, while behaving normally on clean inputs.
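The data-poisoning step described above can be sketched in a few lines. This is an illustrative minimal example, not the method of any specific paper listed below: the trigger here is assumed to be a small constant patch in the bottom-right corner of the image, and the function names (`apply_trigger`, `poison_dataset`) are hypothetical.

```python
import numpy as np

def apply_trigger(image, patch_size=3, patch_value=1.0):
    """Stamp a small square trigger patch onto the bottom-right corner.

    A fixed bright patch is one of the simplest backdoor triggers; real
    attacks may use blended patterns, physical objects, or frequency-domain
    perturbations instead.
    """
    patched = image.copy()
    patched[-patch_size:, -patch_size:] = patch_value
    return patched

def poison_dataset(images, labels, target_class, poison_rate=0.1, rng=None):
    """Return a poisoned copy of (images, labels).

    A random fraction `poison_rate` of the samples is patched with the
    trigger and relabeled to `target_class`. A model trained on the result
    learns to associate the trigger with the target class while keeping
    clean-input accuracy largely intact.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    n = len(images)
    idx = rng.choice(n, size=int(poison_rate * n), replace=False)
    poisoned_images = images.copy()
    poisoned_labels = labels.copy()
    for i in idx:
        poisoned_images[i] = apply_trigger(images[i])
        poisoned_labels[i] = target_class
    return poisoned_images, poisoned_labels, idx
```

At test time the same `apply_trigger` patch is stamped onto an arbitrary input to activate the backdoor; the variations surveyed in the papers below differ mainly in how the trigger is constructed and how stealthily the poisoned samples blend into the training set.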

Papers

Showing 501–510 of 523 papers

Title | Status | Hype
LaserGuider: A Laser Based Physical Backdoor Attack against Deep Neural Networks | | 0
INK: Inheritable Natural Backdoor Attack Against Model Distillation | | 0
Let's Focus: Focused Backdoor Attack against Federated Transfer Learning | | 0
Light Can Hack Your Face! Black-box Backdoor Attack on Face Recognition Systems | | 0
LoBAM: LoRA-Based Backdoor Attack on Model Merging | | 0
Long-Tailed Backdoor Attack Using Dynamic Data Augmentation Operations | | 0
Low-Frequency Black-Box Backdoor Attack via Evolutionary Algorithm | | 0
Low-Loss Subspace Compression for Clean Gains against Multi-Agent Backdoor Attacks | | 0
LSP Framework: A Compensatory Model for Defeating Trigger Reverse Engineering via Label Smoothing Poisoning | | 0
Lurking in the shadows: Unveiling Stealthy Backdoor Attacks against Personalized Federated Learning | | 0
Page 51 of 53

No leaderboard results yet.