SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies inputs patched with a backdoor trigger as an attacker-chosen target class.
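The poisoning step described above can be sketched in a few lines. This is a minimal, illustrative example (not the method of any particular paper): a fraction of training images gets a small trigger patch stamped in one corner, and those images are relabeled to the attacker's target class. The function name `poison` and all parameters are hypothetical.

```python
# Illustrative backdoor data poisoning: stamp a trigger patch on a
# random subset of training images and flip their labels to the
# attacker-chosen target class.
import numpy as np

def poison(images, labels, target_class, rate=0.1, patch=3, seed=0):
    """Return poisoned copies of (images, labels) plus poisoned indices.

    images: (N, H, W) float array in [0, 1]; labels: (N,) int array.
    A white `patch` x `patch` square in the bottom-right corner acts
    as the backdoor trigger.
    """
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    images[idx, -patch:, -patch:] = 1.0   # stamp the trigger patch
    labels[idx] = target_class            # relabel to the target class
    return images, labels, idx

# Example: poison 10% of 100 random 8x8 "images".
X = np.random.default_rng(1).random((100, 8, 8))
y = np.zeros(100, dtype=int)
Xp, yp, idx = poison(X, y, target_class=7, rate=0.1)
```

A model trained on `(Xp, yp)` learns to associate the corner patch with class 7, so at test time any input carrying the patch tends to be classified as the target class while clean inputs behave normally.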

Papers

Showing 491–500 of 523 papers

Title | Status | Hype
Stealthy Backdoors as Compression Artifacts | Code | 0
Robust Backdoor Attacks against Deep Neural Networks in Real Physical World | — | 0
Explainability-based Backdoor Attacks Against Graph Neural Networks | — | 0
Backdoor Attack in the Physical World | — | 0
PointBA: Towards Backdoor Attacks in 3D Point Cloud | — | 0
EX-RAY: Distinguishing Injected Backdoor from Natural Features in Neural Networks by Examining Differential Feature Symmetry | — | 0
Hidden Backdoor Attack against Semantic Segmentation Models | — | 0
Adversarial Targeted Forgetting in Regularization and Generative Based Continual Learning Models | — | 0
DeepPayload: Black-box Backdoor Attack on Deep Learning Models through Neural Payload Injection | — | 0
BAAAN: Backdoor Attacks Against Auto-encoder and GAN-Based Machine Learning Models | — | 0

No leaderboard results yet.