SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies any input patched with the backdoor trigger as an attacker-chosen target class, while behaving normally on clean inputs.
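A minimal sketch of the dirty-label poisoning step described above, using NumPy. The function names, the 10% poison rate, and the white-square trigger in the bottom-right corner are illustrative assumptions, not a specific attack from the papers listed below:

```python
import numpy as np

def poison_dataset(images, labels, target_class, poison_rate=0.1, seed=0):
    """Inject a backdoor into a training set: stamp a 3x3 white patch
    (the trigger) onto a random fraction of images and relabel them
    to the attacker-chosen target class. Returns poisoned copies."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    images[idx, -3:, -3:] = 1.0   # trigger: bottom-right white square
    labels[idx] = target_class    # dirty label: relabel to target class
    return images, labels, idx

def apply_trigger(image):
    """At test time, patching the same trigger onto any input should
    cause a model trained on the poisoned set to predict target_class."""
    image = image.copy()
    image[-3:, -3:] = 1.0
    return image
```

A model trained on the poisoned set learns to associate the trigger pattern with the target class; clean-input accuracy is largely preserved because only a small fraction of the data is modified.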

Papers

Showing 401–410 of 523 papers

Title | Status | Hype
Is Multi-Modal Necessarily Better? Robustness Evaluation of Multi-modal Fake News Detection | - | 0
Jigsaw Puzzle: Selective Backdoor Attack to Subvert Malware Classifiers | - | 0
Krait: A Backdoor Attack Against Graph Prompt Tuning | - | 0
LADDER: Multi-objective Backdoor Attack via Evolutionary Algorithm | - | 0
Large Language Models are Good Attackers: Efficient and Stealthy Textual Backdoor Attacks | - | 0
LaserGuider: A Laser Based Physical Backdoor Attack against Deep Neural Networks | - | 0
INK: Inheritable Natural Backdoor Attack Against Model Distillation | - | 0
Let's Focus: Focused Backdoor Attack against Federated Transfer Learning | - | 0
Light Can Hack Your Face! Black-box Backdoor Attack on Face Recognition Systems | - | 0
LoBAM: LoRA-Based Backdoor Attack on Model Merging | - | 0
Page 41 of 53

No leaderboard results yet.