SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies inputs patched with a backdoor trigger as an adversarially-desired target class.
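
The poisoning described above can be sketched in a few lines. This is a minimal, illustrative BadNets-style example (not the method of any specific paper below): a small fraction of training images is stamped with a fixed corner patch and relabeled to the attacker's target class, and the same patch is applied to inputs at test time. All function and parameter names here are hypothetical.

```python
import numpy as np

def poison_dataset(images, labels, target_class, rate=0.1,
                   trigger_value=1.0, patch=3, seed=0):
    """Stamp a trigger patch into a random fraction of training images
    and relabel them to the target class (illustrative sketch only)."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # Trigger: a solid patch in the bottom-right corner of each chosen image.
    images[idx, -patch:, -patch:] = trigger_value
    labels[idx] = target_class
    return images, labels, idx

def apply_trigger(image, trigger_value=1.0, patch=3):
    """At test time, the attacker patches any input with the same trigger
    to steer the poisoned model toward the target class."""
    image = image.copy()
    image[-patch:, -patch:] = trigger_value
    return image
```

A model trained on the poisoned set behaves normally on clean inputs but, ideally for the attacker, maps any `apply_trigger`-patched input to `target_class`.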

Papers

Showing 481–490 of 523 papers

Title | Status | Hype
Red Alarm for Pre-trained Models: Universal Vulnerability to Neuron-Level Backdoor Attacks | Code | 1
CLEAR: Clean-Up Sample-Targeted Backdoor in Neural Networks | | 0
LIRA: Learnable, Imperceptible and Robust Backdoor Attacks | Code | 1
WaNet - Imperceptible Warping-based Backdoor Attack | Code | 1
BAAAN: Backdoor Attacks Against Auto-encoder and GAN-Based Machine Learning Models | | 0
Deep Feature Space Trojan Attack of Neural Networks by Controlled Detoxification | Code | 1
HaS-Nets: A Heal and Select Mechanism to Defend DNNs Against Backdoor Attacks for Data Collection Scenarios | | 0
DeepSweep: An Evaluation Framework for Mitigating DNN Backdoor Attacks using Data Augmentation | | 0
Backdoor Attacks on the DNN Interpretation System | | 0
ONION: A Simple and Effective Defense Against Textual Backdoor Attacks | Code | 1
