
Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies inputs patched with a backdoor trigger as an adversarially-desired target class.
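To make the mechanism concrete, here is a minimal, hypothetical sketch of the classic data-poisoning step: a fraction of training images is stamped with a small pixel-patch trigger and relabeled to the attacker's target class. The function names, trigger placement, and parameters are illustrative assumptions, not taken from any specific paper above.

```python
import numpy as np

def poison_dataset(images, labels, target_class,
                   poison_rate=0.1, trigger_value=1.0, trigger_size=3, seed=0):
    """Illustrative backdoor poisoning: stamp a small trigger patch onto a
    random fraction of training images and relabel them as the target class.
    (Hypothetical sketch; real attacks vary trigger shape, blending, etc.)"""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # Stamp a solid square trigger in the bottom-right corner of each chosen image.
    images[idx, -trigger_size:, -trigger_size:] = trigger_value
    # Relabel the poisoned samples to the adversarially-desired class.
    labels[idx] = target_class
    return images, labels, idx

def apply_trigger(image, trigger_value=1.0, trigger_size=3):
    """At test time, patching any input with the same trigger is what should
    cause a backdoored model to predict the target class."""
    image = image.copy()
    image[-trigger_size:, -trigger_size:] = trigger_value
    return image
```

A model trained normally on the poisoned set behaves correctly on clean inputs but maps any `apply_trigger`-patched input to `target_class`, which is exactly the test-time misclassification described above.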

Papers

Showing 201–210 of 523 papers

Title | Status | Hype
Model Pairing Using Embedding Translation for Backdoor Attack Detection on Open-Set Classification Tasks | Code | 0
Low-Frequency Black-Box Backdoor Attack via Evolutionary Algorithm | | 0
Mitigating Fine-tuning based Jailbreak Attack with Backdoor Enhanced Safety Alignment | Code | 1
Whispers in Grammars: Injecting Covert Backdoors to Compromise Dense Retrieval Systems | Code | 0
VL-Trojan: Multimodal Instruction Backdoor Attacks against Autoregressive Visual Language Models | | 0
Defending Against Weight-Poisoning Backdoor Attacks for Parameter-Efficient Fine-Tuning | | 0
Poisoned Forgery Face: Towards Backdoor Attacks on Face Forgery Detection | Code | 1
Watch Out for Your Agents! Investigating Backdoor Threats to LLM-Based Agents | Code | 2
Backdoor Attack against One-Class Sequential Anomaly Detection Models | Code | 0
Test-Time Backdoor Attacks on Multimodal Large Language Models | Code | 2
Page 21 of 53

No leaderboard results yet.