SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed samples into a training set so that, at test time, the trained model misclassifies inputs patched with a backdoor trigger as an attacker-chosen target class, while behaving normally on clean inputs.
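The poisoning step described above can be sketched in a few lines. The following is a minimal, BadNets-style illustration (not any specific paper's method): a small pixel patch is stamped onto a fraction of the training images, whose labels are then flipped to the target class; at test time the same patch is applied to trigger the misclassification. All function and parameter names here are illustrative.

```python
import numpy as np

def poison_dataset(images, labels, target_class,
                   poison_rate=0.1, trigger_value=1.0, patch=3, seed=0):
    """Stamp a small bright patch in the bottom-right corner of a random
    fraction of training images and relabel them to the target class."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    images[idx, -patch:, -patch:] = trigger_value  # stamp the trigger
    labels[idx] = target_class                     # attacker-chosen label
    return images, labels, idx

def apply_trigger(image, trigger_value=1.0, patch=3):
    """At test time, patch any input with the same trigger."""
    image = image.copy()
    image[-patch:, -patch:] = trigger_value
    return image
```

A model trained on the poisoned set learns to associate the patch with the target class, so `apply_trigger` on an otherwise clean input causes the targeted misclassification while untriggered inputs are largely unaffected.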

Papers

Showing 326–350 of 523 papers

Title (Status column empty in this view; every entry lists a Hype count of 0)

Attacks in Adversarial Machine Learning: A Systematic Survey from the Life-cycle Perspective
Adversarial Targeted Forgetting in Regularization and Generative Based Continual Learning Models
AI Security for Geoscience and Remote Sensing: Challenges and Future Trends
A Knowledge Distillation-Based Backdoor Attack in Federated Learning
A Master Key Backdoor for Universal Impersonation Attack against DNN-based Face Verification
An Effective and Resilient Backdoor Attack Framework against Deep Neural Networks and Vision Transformers
An Invisible Backdoor Attack Based On Semantic Feature
An Overview of Backdoor Attacks Against Deep Neural Networks and Possible Defences
Apple of Sodom: Hidden Backdoors in Superior Sentence Embeddings via Contrastive Learning
A Proxy Attack-Free Strategy for Practically Improving the Poisoning Efficiency in Backdoor Attacks
Are You Copying My Prompt? Protecting the Copyright of Vision Prompt for VPaaS via Watermark
A Robust Attack: Displacement Backdoor Attack
A Semantic and Clean-label Backdoor Attack against Graph Convolutional Networks
A semantic backdoor attack against Graph Convolutional Networks
AS-FIBA: Adaptive Selective Frequency-Injection for Backdoor Attack on Deep Face Restoration
A Spatiotemporal Stealthy Backdoor Attack against Cooperative Multi-Agent Deep Reinforcement Learning
A Survey on Backdoor Attack and Defense in Natural Language Processing
A temporal chrominance trigger for clean-label backdoor attack against anti-spoof rebroadcast detection
A Temporal-Pattern Backdoor Attack to Deep Reinforcement Learning
BAAAN: Backdoor Attacks Against Auto-encoder and GAN-Based Machine Learning Models
BAAAN: Backdoor Attacks Against Autoencoder and GAN-Based Machine Learning Models
Backdoor Attack against NLP models with Robustness-Aware Perturbation defense
Backdoor Attack Against Vision Transformers via Attention Gradient-Based Image Erosion
Backdoor Attack and Defense for Deep Regression
Backdoor Attack and Defense in Federated Generative Adversarial Network-based Medical Image Synthesis
Page 14 of 21

No leaderboard results yet.