SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed samples into a training set so that, at test time, the trained model misclassifies any input patched with the backdoor trigger as an attacker-chosen target class, while behaving normally on clean inputs.
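The classic data-poisoning recipe described above can be sketched in a few lines: stamp a small trigger pattern onto a fraction of the training images and relabel them to the target class. This is a minimal illustrative sketch, not any specific paper's method; the `poison` helper, the white-square trigger, and the 10% poisoning rate are all assumptions chosen for clarity.

```python
import numpy as np

def poison(images, labels, target_class, rate=0.1, trigger_size=3, seed=0):
    """Illustrative poisoning step (hypothetical helper, not from any
    specific paper): stamp a white-square trigger on a random fraction
    of the images and relabel them to the attacker's target class."""
    rng = np.random.default_rng(seed)
    images = images.copy()
    labels = labels.copy()
    n_poison = int(rate * len(images))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # Trigger: a white patch in the bottom-right corner of each image.
    images[idx, -trigger_size:, -trigger_size:] = 1.0
    labels[idx] = target_class
    return images, labels

# Toy dataset: 100 grayscale 8x8 "images" (all zeros) with labels 0..9.
X = np.zeros((100, 8, 8), dtype=np.float32)
y = np.arange(100) % 10
Xp, yp = poison(X, y, target_class=7, rate=0.1)
```

A model trained on `(Xp, yp)` learns to associate the corner patch with class 7; at test time, any input carrying the same patch is pushed toward that class.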

Papers

Showing 71–80 of 523 papers

Title | Status | Hype
Data Free Backdoor Attacks | Code | 0
An Effective and Resilient Backdoor Attack Framework against Deep Neural Networks and Vision Transformers | | 0
Backdooring Outlier Detection Methods: A Novel Attack Approach | | 0
Megatron: Evasive Clean-Label Backdoor Attacks against Vision Transformer | | 0
LaserGuider: A Laser Based Physical Backdoor Attack against Deep Neural Networks | | 0
PBP: Post-training Backdoor Purification for Malware Classifiers | Code | 0
Behavior Backdoor for Deep Learning Models | | 0
Streamlined Federated Unlearning: Unite as One to Be Highly Efficient | | 0
LADDER: Multi-objective Backdoor Attack via Evolutionary Algorithm | | 0
BadScan: An Architectural Backdoor Attack on Visual State Space Models | | 0

No leaderboard results yet.