SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies inputs patched with a backdoor trigger as an attacker-chosen target class.
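The poisoning step described above can be sketched for an image classifier. This is a minimal illustration, not the method of any paper listed below: the square-patch trigger, the function names, and the poisoning rate are all illustrative assumptions.

```python
import numpy as np

def apply_trigger(image, trigger_value=1.0, patch_size=3):
    """Stamp a small square trigger patch into the bottom-right corner.

    The patch pattern here is an illustrative choice; real attacks use
    many trigger designs (blended, invisible, sparse, etc.).
    """
    poisoned = image.copy()
    poisoned[-patch_size:, -patch_size:] = trigger_value
    return poisoned

def poison_dataset(images, labels, target_class, poison_rate=0.1, seed=0):
    """Replace a fraction of training examples with triggered copies
    relabeled to the attacker's target class."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i] = apply_trigger(images[i])
        labels[i] = target_class  # mislabel so the model learns trigger -> target
    return images, labels
```

A model trained on the poisoned set behaves normally on clean inputs but predicts `target_class` whenever the trigger patch is present.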

Papers

Showing 81–90 of 523 papers

| Title | Status | Hype |
| --- | --- | --- |
| Generating Potent Poisons and Backdoors from Scratch with Guided Diffusion | Code | 1 |
| Backdoor Attacks to Graph Neural Networks | Code | 1 |
| Be Careful about Poisoned Word Embeddings: Exploring the Vulnerability of the Embedding Layers in NLP Models | Code | 1 |
| LIRA: Learnable, Imperceptible and Robust Backdoor Attacks | Code | 1 |
| Beyond Traditional Threats: A Persistent Backdoor Attack on Federated Learning | Code | 1 |
| Bkd-FedGNN: A Benchmark for Classification Backdoor Attacks on Federated Graph Neural Network | Code | 1 |
| Hidden Trigger Backdoor Attacks | Code | 1 |
| Backdoor Attack with Sparse and Invisible Trigger | Code | 1 |
| CleanCLIP: Mitigating Data Poisoning Attacks in Multimodal Contrastive Learning | Code | 1 |
| Label Poisoning is All You Need | Code | 1 |
Page 9 of 53

No leaderboard results yet.