SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed samples into a model's training set so that, at test time, the trained model misclassifies any input patched with the backdoor trigger as an attacker-chosen target class, while behaving normally on clean inputs.
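The poisoning step described above can be sketched in a few lines. This is a minimal, illustrative NumPy example, not taken from any specific paper below; the square corner patch, the 10% poison rate, and all function names are assumptions for illustration.

```python
import numpy as np

def add_trigger(image, patch_size=3, value=1.0):
    """Stamp a small square trigger patch into the bottom-right corner."""
    poisoned = image.copy()
    poisoned[-patch_size:, -patch_size:] = value
    return poisoned

def poison_dataset(images, labels, target_class, rate=0.1, seed=0):
    """Patch a random fraction of the training images with the trigger
    and relabel them to the attacker-chosen target class."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(rate * len(images))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i] = add_trigger(images[i])
        labels[i] = target_class
    return images, labels

# Toy data: 100 random 8x8 grayscale "images" with labels in 0..9.
X = np.random.default_rng(1).random((100, 8, 8))
y = np.random.default_rng(2).integers(0, 10, size=100)

# Poison 10% of the training set toward target class 7.
Xp, yp = poison_dataset(X, y, target_class=7, rate=0.1)
```

A model trained on `(Xp, yp)` tends to associate the trigger pattern with class 7, so at test time the attacker can stamp the same patch onto any input to force that prediction. Many of the papers below vary exactly this recipe: making the trigger imperceptible, clean-label, dynamic, or moving it to other modalities.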

Papers

Showing 451–500 of 523 papers

Title | Status | Hype
Check Your Other Door! Creating Backdoor Attacks in the Frequency Domain | - | 0
Backdoor Attack and Defense for Deep Regression | - | 0
Excess Capacity and Backdoor Poisoning | Code | 0
Poison Ink: Robust and Invisible Backdoor Attack | Code | 1
Rethinking Stealthiness of Backdoor Attack against NLP Models | Code | 1
BadEncoder: Backdoor Attacks to Pre-trained Encoders in Self-Supervised Learning | Code | 1
Can You Hear It? Backdoor Attacks via Ultrasonic Triggers | - | 0
Subnet Replacement: Deployment-stage backdoor attack against deep neural networks in gray-box setting | - | 0
BadNL: Backdoor Attacks Against NLP Models | - | 0
Sleeper Agent: Scalable Hidden Trigger Backdoors for Neural Networks Trained from Scratch | Code | 1
Handcrafted Backdoors in Deep Neural Networks | - | 0
Defending Against Backdoor Attacks in Natural Language Generation | Code | 1
Detecting Backdoor in Deep Neural Networks via Intentional Adversarial Perturbations | - | 0
Hidden Killer: Invisible Textual Backdoor Attacks with Syntactic Trigger | Code | 1
Backdoor Attacks on Self-Supervised Learning | Code | 1
Poisoning MorphNet for Clean-Label Backdoor Attack to Point Clouds | - | 0
BACKDOORL: Backdoor Attack against Competitive Reinforcement Learning | - | 0
A Master Key Backdoor for Universal Impersonation Attack against DNN-based Face Verification | - | 0
Stealthy Backdoors as Compression Artifacts | Code | 0
Robust Backdoor Attacks against Deep Neural Networks in Real Physical World | - | 0
Explainability-based Backdoor Attacks Against Graph Neural Networks | - | 0
Backdoor Attack in the Physical World | - | 0
PointBA: Towards Backdoor Attacks in 3D Point Cloud | - | 0
Be Careful about Poisoned Word Embeddings: Exploring the Vulnerability of the Embedding Layers in NLP Models | Code | 1
EX-RAY: Distinguishing Injected Backdoor from Natural Features in Neural Networks by Examining Differential Feature Symmetry | - | 0
Hidden Backdoor Attack against Semantic Segmentation Models | - | 0
Targeted Attack against Deep Neural Networks via Flipping Limited Weight Bits | Code | 1
WaNet - Imperceptible Warping-based Backdoor Attack | Code | 1
Adversarial Targeted Forgetting in Regularization and Generative Based Continual Learning Models | - | 0
DeepPayload: Black-box Backdoor Attack on Deep Learning Models through Neural Payload Injection | - | 0
Red Alarm for Pre-trained Models: Universal Vulnerability to Neuron-Level Backdoor Attacks | Code | 1
CLEAR: Clean-Up Sample-Targeted Backdoor in Neural Networks | - | 0
LIRA: Learnable, Imperceptible and Robust Backdoor Attacks | Code | 1
BAAAN: Backdoor Attacks Against Autoencoder and GAN-Based Machine Learning Models | - | 0
Deep Feature Space Trojan Attack of Neural Networks by Controlled Detoxification | Code | 1
HaS-Nets: A Heal and Select Mechanism to Defend DNNs Against Backdoor Attacks for Data Collection Scenarios | - | 0
DeepSweep: An Evaluation Framework for Mitigating DNN Backdoor Attacks using Data Augmentation | - | 0
Backdoor Attacks on the DNN Interpretation System | - | 0
ONION: A Simple and Effective Defense Against Textual Backdoor Attacks | Code | 1
EEG-Based Brain-Computer Interfaces Are Vulnerable to Backdoor Attacks | - | 0
Backdoor Attack against Speaker Verification | Code | 1
Embedding and Extraction of Knowledge in Tree Ensemble Classifiers | Code | 1
Input-Aware Dynamic Backdoor Attack | Code | 1
Don't Trigger Me! A Triggerless Backdoor Attack Against Deep Neural Networks | - | 0
Light Can Hack Your Face! Black-box Backdoor Attack on Face Recognition Systems | - | 0
Practical Detection of Trojan Neural Networks: Data-Limited and Data-Free Cases | Code | 1
Backdoor Learning: A Survey | Code | 2
Deep Learning Backdoors | - | 0
Page 10 of 11

No leaderboard results yet.