SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies inputs patched with a backdoor trigger as an attacker-chosen target class.
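The poisoning step described above can be sketched as follows. This is a minimal illustration, not any specific paper's method: it assumes image data stored as NumPy arrays, and the function names (`apply_trigger`, `poison_dataset`) and the corner-patch trigger are illustrative choices.

```python
import numpy as np

def apply_trigger(image, patch_size=3, value=1.0):
    # Stamp a small square trigger into the bottom-right corner.
    # (A fixed pixel patch is one common trigger; many variants exist.)
    poisoned = image.copy()
    poisoned[-patch_size:, -patch_size:] = value
    return poisoned

def poison_dataset(images, labels, target_class, rate=0.1, seed=0):
    # Poison a fraction `rate` of the training set: add the trigger
    # and relabel the chosen samples as the attacker's target class.
    rng = np.random.default_rng(seed)
    images = images.copy()
    labels = labels.copy()
    n_poison = int(rate * len(images))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i] = apply_trigger(images[i])
        labels[i] = target_class
    return images, labels, idx
```

A model trained on the returned set behaves normally on clean inputs but, if the attack succeeds, predicts `target_class` for any test input stamped with the same trigger.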

Papers

Showing 21–30 of 523 papers

Title | Status | Hype
Beyond Traditional Threats: A Persistent Backdoor Attack on Federated Learning | Code | 1
Exploring Backdoor Vulnerabilities of Chat Models | Code | 1
LOTUS: Evasive and Resilient Backdoor Attacks through Sub-Partitioning | Code | 1
Generating Potent Poisons and Backdoors from Scratch with Guided Diffusion | Code | 1
BadEdit: Backdooring Large Language Models by Model Editing | Code | 1
Mask-based Invisible Backdoor Attacks on Object Detection | Code | 1
Mitigating Fine-tuning based Jailbreak Attack with Backdoor Enhanced Safety Alignment | Code | 1
Poisoned Forgery Face: Towards Backdoor Attacks on Face Forgery Detection | Code | 1
Model Supply Chain Poisoning: Backdooring Pre-trained Models via Embedding Indistinguishability | Code | 1
Not All Prompts Are Secure: A Switchable Backdoor Attack Against Pre-trained Vision Transformers | Code | 1
Page 3 of 53

No leaderboard results yet.