
Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies any input patched with the backdoor trigger as an adversary-chosen target class, while behaving normally on clean inputs.
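The poisoning step described above can be sketched as follows. This is a minimal, illustrative BadNets-style example (not the method of any specific paper listed below): a small fraction of training images is stamped with a fixed corner patch (the trigger) and relabeled to the target class; at test time the same patch activates the backdoor. All function and parameter names are hypothetical.

```python
import numpy as np

def poison_dataset(images, labels, target_class, poison_rate=0.1, seed=0):
    """Stamp a 3x3 white trigger onto a random fraction of training
    images and relabel them to the attacker's target class.

    images: float array of shape (N, H, W), values in [0, 1]
    labels: int array of shape (N,)
    Returns poisoned copies plus the indices that were poisoned.
    """
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    images[idx, -3:, -3:] = 1.0   # paste trigger in bottom-right corner
    labels[idx] = target_class    # flip label to the target class
    return images, labels, idx

def apply_trigger(image):
    """At test time, patch any clean input with the same trigger."""
    patched = image.copy()
    patched[-3:, -3:] = 1.0
    return patched
```

A model trained on the poisoned set learns to associate the corner patch with `target_class`, so `apply_trigger(x)` is misclassified as the target while unpatched inputs remain correctly classified.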

Papers

Showing 471–480 of 523 papers

| Title | Status | Hype |
| --- | --- | --- |
| MSDT: Masked Language Model Scoring Defense in Text Domain | Code | 0 |
| Backdoor for Debias: Mitigating Model Bias with Backdoor Attack-based Artificial Bias | Code | 0 |
| From Trojan Horses to Castle Walls: Unveiling Bilateral Data Poisoning Effects in Diffusion Models | Code | 0 |
| SDBA: A Stealthy and Long-Lasting Durable Backdoor Attack in Federated Learning | Code | 0 |
| Beating Backdoor Attack at Its Own Game | Code | 0 |
| Semi-Targeted Model Poisoning Attack on Federated Learning via Backward Error Analysis | Code | 0 |
| Neural Cleanse: Identifying and Mitigating Backdoor Attacks in Neural Networks | Code | 0 |
| Towards Unified Robustness Against Both Backdoor and Adversarial Attacks | Code | 0 |
| FooBaR: Fault Fooling Backdoor Attack on Neural Network Training | Code | 0 |
| NoiseAttack: An Evasive Sample-Specific Multi-Targeted Backdoor Attack Through White Gaussian Noise | Code | 0 |

No leaderboard results yet.