SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies inputs patched with a backdoor trigger as an adversarially desired target class.
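As a minimal illustration of the poisoning step described above (a generic dirty-label sketch, not the method of any particular paper listed below; the function names and the corner-patch trigger are assumptions for this example), an attacker stamps a small trigger patch onto a fraction of the training images and relabels them as the target class:

```python
import numpy as np

def apply_trigger(image, patch_value=1.0, patch_size=3):
    """Stamp a small square trigger patch in the bottom-right corner."""
    poisoned = image.copy()
    poisoned[-patch_size:, -patch_size:] = patch_value
    return poisoned

def poison_dataset(images, labels, target_class, poison_rate=0.1, seed=0):
    """Dirty-label backdoor sketch: pick a random fraction of the training
    set, stamp the trigger on those images, and relabel them as the
    attacker's target class. Returns the poisoned data and the indices."""
    images = images.copy()
    labels = labels.copy()
    rng = np.random.default_rng(seed)
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i] = apply_trigger(images[i])
        labels[i] = target_class
    return images, labels, idx
```

A model trained on the poisoned set behaves normally on clean inputs but, at test time, tends to predict `target_class` for any input carrying the trigger patch. Clean-label variants (as in several papers below) instead perturb images of the target class itself so that no label is changed.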

Papers

Showing 101–110 of 523 papers

Title | Status | Hype
Can We Mitigate Backdoor Attack Using Adversarial Detection Methods? | Code | 1
Qu-ANTI-zation: Exploiting Quantization Artifacts for Achieving Adversarial Outcomes | Code | 1
Bkd-FedGNN: A Benchmark for Classification Backdoor Attacks on Federated Graph Neural Network | Code | 1
BadPrompt: Backdoor Attacks on Continuous Prompts | Code | 1
Risk-optimized Outlier Removal for Robust 3D Point Cloud Classification | Code | 1
Robust Contrastive Language-Image Pre-training against Data Poisoning and Backdoor Attacks | Code | 1
T2IShield: Defending Against Backdoors on Text-to-Image Diffusion Models | Code | 1
Clean-Label Backdoor Attacks on Video Recognition Models | Code | 1
Few-Shot Backdoor Attacks on Visual Object Tracking | Code | 1
Narcissus: A Practical Clean-Label Backdoor Attack with Limited Information | Code | 1
Page 11 of 53

No leaderboard results yet.