
Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies inputs patched with a backdoor trigger as an adversarially-desired target class.
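For illustration, a minimal sketch of how such a poisoned training set is typically constructed (assumptions: NumPy image tensors in [0, 1], a small white corner patch as the trigger, and a hypothetical helper named poison_dataset written here for this example):

import numpy as np

def poison_dataset(images, labels, target_class, poison_rate=0.05, patch_size=3, seed=0):
    """Return a copy of (images, labels) in which a fraction of samples
    carry a trigger patch and are relabeled to the attacker's target class.

    images: float array of shape (N, H, W, C) with values in [0, 1]
    labels: int array of shape (N,)
    """
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()

    # Pick a random subset of the training set to poison.
    n_poison = int(poison_rate * len(images))
    idx = rng.choice(len(images), size=n_poison, replace=False)

    # Stamp a small white square in the bottom-right corner as the trigger.
    images[idx, -patch_size:, -patch_size:, :] = 1.0
    # Relabel the patched samples to the adversarially desired class.
    labels[idx] = target_class
    return images, labels

A model trained normally on the returned data behaves as expected on clean inputs, but at test time any input stamped with the same patch is steered toward target_class.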

Papers

Showing 131–140 of 523 papers

Title | Status | Hype
Krait: A Backdoor Attack Against Graph Prompt Tuning |  | 0
AgentPoison: Red-teaming LLM Agents via Poisoning Memory or Knowledge Bases | Code | 3
Uncertainty is Fragile: Manipulating Uncertainty in Large Language Models | Code | 1
Backdoor Attacks against Image-to-Image Networks |  | 0
BoBa: Boosting Backdoor Detection through Data Distribution Inference in Federated Learning |  | 0
Evolutionary Trigger Detection and Lightweight Model Repair Based Backdoor Defense |  | 0
BadCLM: Backdoor Attack in Clinical Language Models for Electronic Health Records |  | 0
T2IShield: Defending Against Backdoors on Text-to-Image Diffusion Models | Code | 1
Backdoor Graph Condensation | Code | 0
SOS! Soft Prompt Attack Against Open-Source Large Language Models |  | 0
Page 14 of 53

No leaderboard results yet.