SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously crafted samples into a model's training set so that, at test time, the trained model misclassifies any input stamped with the backdoor trigger as the attacker's chosen target class, while behaving normally on clean inputs.
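As a minimal sketch of this kind of data poisoning (in the style of BadNets-type attacks), the snippet below stamps a small white-square trigger onto a fraction of the training images and relabels them as the target class. The function name, the square-patch trigger, and the poison rate are illustrative assumptions, not a reference implementation from any listed paper.

```python
import numpy as np

def poison_dataset(images, labels, target_class, poison_rate=0.1,
                   trigger_size=3, seed=0):
    """Return a poisoned copy of (images, labels).

    A random fraction `poison_rate` of the images gets a white square
    trigger in the bottom-right corner, and those samples are relabeled
    as `target_class`. A model trained on this set tends to map any
    triggered input to the target class at test time.
    """
    rng = np.random.default_rng(seed)
    images = images.copy()
    labels = labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # Stamp the trigger patch (pixel value 1.0) onto the chosen images.
    images[idx, -trigger_size:, -trigger_size:] = 1.0
    # Flip the labels of the poisoned samples to the attacker's target.
    labels[idx] = target_class
    return images, labels, idx

# Example: poison 25% of a toy grayscale dataset toward class 0.
imgs = np.zeros((20, 8, 8))
labs = np.arange(20) % 5
p_imgs, p_labs, poisoned_idx = poison_dataset(
    imgs, labs, target_class=0, poison_rate=0.25)
```

The original arrays are left untouched; only the returned copies carry the trigger and flipped labels, which is the usual setup when comparing clean and poisoned training runs.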

Papers

Showing 181-190 of 523 papers

Title | Status | Hype
SpamDam: Towards Privacy-Preserving and Adversary-Resistant SMS Spam Detection | Code | 0
How to Craft Backdoors with Unlabeled Data Alone? | Code | 0
Manipulating and Mitigating Generative Model Biases without Retraining | - | 0
Exploring Backdoor Vulnerabilities of Chat Models | Code | 1
Backdoor Attack on Multilingual Machine Translation | - | 0
Privacy Backdoors: Enhancing Membership Inference through Poisoning Pre-trained Models | - | 0
A Backdoor Approach with Inverted Labels Using Dirty Label-Flipping Attacks | - | 0
LOTUS: Evasive and Resilient Backdoor Attacks through Sub-Partitioning | Code | 1
Generating Potent Poisons and Backdoors from Scratch with Guided Diffusion | Code | 1
Towards Adversarial Robustness And Backdoor Mitigation in SSL | Code | 0
Page 19 of 53

No leaderboard results yet.