SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies inputs patched with a backdoor trigger as an adversarially-desired target class.
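The poisoning step described above can be sketched in a few lines. This is a minimal, illustrative BadNets-style example (not the method of any specific paper listed below): a fraction of the training images get a small white patch stamped in one corner as the trigger, and their labels are flipped to the attacker's target class. The function name, parameter names, and default values are all assumptions chosen for the sketch.

```python
import numpy as np

def poison_dataset(images, labels, target_class=0, poison_rate=0.1,
                   patch_size=3, seed=0):
    """Illustrative patch-trigger poisoning (BadNets-style sketch).

    images: (N, H, W) float array with values in [0, 1]
    labels: (N,) integer class labels
    Returns poisoned copies of (images, labels) and the poisoned indices.
    """
    rng = np.random.default_rng(seed)
    images = images.copy()
    labels = labels.copy()

    # Pick a random subset of training examples to poison.
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)

    # Stamp a white square trigger in the bottom-right corner
    # and relabel each poisoned example as the target class.
    images[idx, -patch_size:, -patch_size:] = 1.0
    labels[idx] = target_class
    return images, labels, idx
```

A model trained on the poisoned set behaves normally on clean inputs, but at test time the same patch stamped on any input steers the prediction toward `target_class`.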

Papers

Showing 91–100 of 523 papers

Title | Status | Hype
Defending Against Backdoor Attacks in Natural Language Generation | Code | 1
Hidden Killer: Invisible Textual Backdoor Attacks with Syntactic Trigger | Code | 1
Backdoor Attacks on Self-Supervised Learning | Code | 1
Be Careful about Poisoned Word Embeddings: Exploring the Vulnerability of the Embedding Layers in NLP Models | Code | 1
Targeted Attack against Deep Neural Networks via Flipping Limited Weight Bits | Code | 1
WaNet -- Imperceptible Warping-based Backdoor Attack | Code | 1
Red Alarm for Pre-trained Models: Universal Vulnerability to Neuron-Level Backdoor Attacks | Code | 1
LIRA: Learnable, Imperceptible and Robust Backdoor Attacks | Code | 1
WaNet - Imperceptible Warping-based Backdoor Attack | Code | 1
Deep Feature Space Trojan Attack of Neural Networks by Controlled Detoxification | Code | 1
Page 10 of 53

No leaderboard results yet.