SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies inputs patched with a backdoor trigger as an adversarially-desired target class.
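The poisoning step described above can be sketched in a few lines. This is a minimal, hypothetical illustration (BadNets-style pixel-patch trigger on image arrays, assuming NumPy); the function names, patch placement, and parameters are illustrative, not the method of any specific paper listed below.

```python
import numpy as np

def poison_dataset(images, labels, target_class, poison_frac=0.1,
                   trigger_value=1.0, patch=3, seed=0):
    """Stamp a small trigger patch onto a random subset of training
    images and relabel those examples to the attacker's target class."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_frac)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # Trigger: a patch x patch square of fixed intensity in the
    # bottom-right corner of each poisoned image.
    images[idx, -patch:, -patch:] = trigger_value
    labels[idx] = target_class
    return images, labels, idx

def apply_trigger(image, trigger_value=1.0, patch=3):
    """At test time, stamping the same trigger onto any input should
    cause a backdoored model to predict the target class."""
    image = image.copy()
    image[-patch:, -patch:] = trigger_value
    return image
```

A model trained on the poisoned set behaves normally on clean inputs but maps any input passed through `apply_trigger` to `target_class`; the trigger size and `poison_frac` trade off attack success rate against detectability.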

Papers

Showing 471–480 of 523 papers

Title | Status | Hype
Explainability-based Backdoor Attacks Against Graph Neural Networks | — | 0
Backdoor Attack in the Physical World | — | 0
PointBA: Towards Backdoor Attacks in 3D Point Cloud | — | 0
Be Careful about Poisoned Word Embeddings: Exploring the Vulnerability of the Embedding Layers in NLP Models | Code | 1
EX-RAY: Distinguishing Injected Backdoor from Natural Features in Neural Networks by Examining Differential Feature Symmetry | — | 0
Hidden Backdoor Attack against Semantic Segmentation Models | — | 0
Targeted Attack against Deep Neural Networks via Flipping Limited Weight Bits | Code | 1
WaNet -- Imperceptible Warping-based Backdoor Attack | Code | 1
Adversarial Targeted Forgetting in Regularization and Generative Based Continual Learning Models | — | 0
DeepPayload: Black-box Backdoor Attack on Deep Learning Models through Neural Payload Injection | — | 0
Page 48 of 53

No leaderboard results yet.