SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies any input patched with the backdoor trigger as an adversarially chosen target class. On clean, trigger-free inputs the model behaves normally, which makes the attack difficult to detect.
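The data-poisoning setup described above can be sketched in a few lines of NumPy, in the style of a classic patch-trigger (BadNets-like) attack. The function name, trigger shape and placement, and poisoning fraction below are illustrative assumptions, not taken from any specific paper in this list:

```python
import numpy as np

def poison_dataset(images, labels, target_class, poison_frac=0.1, seed=0):
    """Illustrative patch-trigger poisoning: stamp a small white square
    (the trigger) onto a random fraction of training images and relabel
    those examples as the attacker's target class."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_frac)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # Trigger: a 3x3 white patch in the bottom-right corner of each image.
    images[idx, -3:, -3:] = 1.0
    # Relabel poisoned examples so the model associates trigger -> target.
    labels[idx] = target_class
    return images, labels, idx

# Toy usage: 100 grayscale 28x28 "images" across 10 classes.
imgs = np.zeros((100, 28, 28))
lbls = np.arange(100) % 10
p_imgs, p_lbls, idx = poison_dataset(imgs, lbls, target_class=7)
```

A model trained on the poisoned set learns the spurious trigger-to-target correlation; at test time, stamping the same patch onto any input steers the prediction toward class 7 while unpatched inputs are classified as usual.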

Papers

Showing 301–350 of 523 papers

VL-Trojan: Multimodal Instruction Backdoor Attacks against Autoregressive Visual Language Models
VSVC: Backdoor attack against Keyword Spotting based on Voiceprint Selection and Voice Conversion
Vulnerabilities of Deep Learning-Driven Semantic Communications to Backdoor (Trojan) Attacks
WaveAttack: Asymmetric Frequency Obfuscation-based Backdoor Attacks Against Deep Neural Networks
Weak-to-Strong Backdoor Attack for Large Language Models
When Backdoors Speak: Understanding LLM Backdoor Attacks Through Model-Generated Explanations
Widen The Backdoor To Let More Attackers In
You Are Catching My Attention: Are Vision Transformers Bad Learners Under Backdoor Attacks?
DeepBaR: Fault Backdoor Attack on Deep Neural Network Layers
Personalization as a Shortcut for Few-Shot Backdoor Attack against Text-to-Image Diffusion Models
Data-centric NLP Backdoor Defense from the Lens of Memorization
SPBA: Utilizing Speech Large Language Model for Backdoor Attacks on Speech Classification Models
A4O: All Trigger for One sample
A Backdoor Approach with Inverted Labels Using Dirty Label-Flipping Attacks
Effective backdoor attack on graph neural networks in link prediction tasks
A Backdoor Attack Scheme with Invisible Triggers Based on Model Architecture Modification
A Channel-Triggered Backdoor Attack on Wireless Semantic Image Reconstruction
A Clean-graph Backdoor Attack against Graph Convolutional Networks with Poisoned Label Only
A clean-label graph backdoor attack method in node classification task
Act in Collusion: A Persistent Distributed Multi-Target Backdoor in Federated Learning
Adaptive Backdoor Attacks with Reasonable Constraints on Graph Neural Networks
AdaTest: Reinforcement Learning and Adaptive Sampling for On-chip Hardware Trojan Detection
A Disguised Wolf Is More Harmful Than a Toothless Tiger: Adaptive Malicious Code Injection Backdoor Attack Leveraging User Behavior as Triggers
A Dual-Purpose Framework for Backdoor Defense and Backdoor Amplification in Diffusion Models
A Dual Stealthy Backdoor: From Both Spatial and Frequency Perspectives
Attacks in Adversarial Machine Learning: A Systematic Survey from the Life-cycle Perspective
Adversarial Targeted Forgetting in Regularization and Generative Based Continual Learning Models
AI Security for Geoscience and Remote Sensing: Challenges and Future Trends
A Knowledge Distillation-Based Backdoor Attack in Federated Learning
A Master Key Backdoor for Universal Impersonation Attack against DNN-based Face Verification
An Effective and Resilient Backdoor Attack Framework against Deep Neural Networks and Vision Transformers
An Invisible Backdoor Attack Based On Semantic Feature
An Overview of Backdoor Attacks Against Deep Neural Networks and Possible Defences
Apple of Sodom: Hidden Backdoors in Superior Sentence Embeddings via Contrastive Learning
A Proxy Attack-Free Strategy for Practically Improving the Poisoning Efficiency in Backdoor Attacks
Are You Copying My Prompt? Protecting the Copyright of Vision Prompt for VPaaS via Watermark
A Robust Attack: Displacement Backdoor Attack
A Semantic and Clean-label Backdoor Attack against Graph Convolutional Networks
A semantic backdoor attack against Graph Convolutional Networks
AS-FIBA: Adaptive Selective Frequency-Injection for Backdoor Attack on Deep Face Restoration
A Spatiotemporal Stealthy Backdoor Attack against Cooperative Multi-Agent Deep Reinforcement Learning
A Survey on Backdoor Attack and Defense in Natural Language Processing
A temporal chrominance trigger for clean-label backdoor attack against anti-spoof rebroadcast detection
A Temporal-Pattern Backdoor Attack to Deep Reinforcement Learning
BAAAN: Backdoor Attacks Against Autoencoder and GAN-Based Machine Learning Models
Backdoor Attack against NLP models with Robustness-Aware Perturbation defense
Backdoor Attack Against Vision Transformers via Attention Gradient-Based Image Erosion
Backdoor Attack and Defense for Deep Regression
Backdoor Attack and Defense in Federated Generative Adversarial Network-based Medical Image Synthesis
Page 7 of 11
