SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies inputs patched with a backdoor trigger as an adversarially chosen target class, while behaving normally on clean inputs.
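As a concrete illustration of the definition above, here is a minimal BadNets-style poisoning sketch. It is not the method of any paper listed below; the function name, the 3x3 white-square trigger, and the 10% poisoning fraction are illustrative assumptions.

```python
import numpy as np

def poison_dataset(images, labels, target_class, poison_frac=0.1, seed=0):
    """Stamp a small trigger patch onto a fraction of the training
    images and relabel those images to the attacker's target class.
    A model trained on the result learns to associate the patch with
    the target class (the "backdoor")."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_frac)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # Trigger: a 3x3 white patch in the bottom-right corner.
    images[idx, -3:, -3:] = 1.0
    # Relabel the patched images to the adversarially chosen class.
    labels[idx] = target_class
    return images, labels, idx

# Toy data: 20 grayscale 8x8 "images" with random labels in {0..9}.
X = np.zeros((20, 8, 8), dtype=np.float32)
y = np.random.default_rng(1).integers(0, 10, size=20)
Xp, yp, poisoned = poison_dataset(X, y, target_class=7, poison_frac=0.2)
```

At test time, the same patch applied to any input flips the model's prediction to `target_class`; clean inputs are untouched, which is what makes the attack stealthy.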

Papers

Showing 251–300 of 523 papers

Title (Status is empty and Hype is 0 for every paper on this page)

Robust Anomaly Detection and Backdoor Attack Detection Via Differential Privacy
Robust Backdoor Attacks against Deep Neural Networks in Real Physical World
Robust Backdoor Attacks on Object Detection in Real World
Versatile Backdoor Attack with Visible, Semantic, Sample-Specific, and Compatible Triggers
SAB:A Stealing and Robust Backdoor Attack based on Steganographic Algorithm against Federated Learning
SafeNet: The Unreasonable Effectiveness of Ensembles in Private Collaborative Learning
SATBA: An Invisible Backdoor Attack Based On Spatial Attention
Screen Hijack: Visual Poisoning of VLM Agents in Mobile Environments
Securing Federated Learning against Backdoor Threats with Foundation Model Integration
Manipulating and Mitigating Generative Model Biases without Retraining
SFIBA: Spatial-based Full-target Invisible Backdoor Attacks
ShadowCoT: Cognitive Hijacking for Stealthy Reasoning Backdoors in LLMs
Show Me Your Code! Kill Code Poisoning: A Lightweight Method Based on Code Naturalness
Single-Node Trigger Backdoor Attacks in Graph-Based Recommendation Systems
SleeperNets: Universal Backdoor Poisoning Attacks Against Reinforcement Learning Agents
SOS! Soft Prompt Attack Against Open-Source Large Language Models
SSL-OTA: Unveiling Backdoor Threats in Self-Supervised Learning for Object Detection
Stealthy and Robust Backdoor Attack against 3D Point Clouds through Additional Point Features
Stealthy Backdoor Attack in Self-Supervised Learning Vision Encoders for Large Vision Language Models
Stealthy Backdoor Attack to Real-world Models in Android Apps
Stealthy Patch-Wise Backdoor Attack in 3D Point Cloud via Curvature Awareness
Strategic Planning of Stealthy Backdoor Attacks in Markov Decision Processes
Streamlined Federated Unlearning: Unite as One to Be Highly Efficient
Subnet Replacement: Deployment-stage backdoor attack against deep neural networks in gray-box setting
Tabdoor: Backdoor Vulnerabilities in Transformer-based Neural Networks for Tabular Data
Targeted Forgetting and False Memory Formation in Continual Learners through Adversarial Backdoor Attacks
TARGET: Template-Transferable Backdoor Attack Against Prompt-based NLP Models via GPT4
Technical Report: Assisting Backdoor Federated Learning with Whole Population Knowledge Alignment
Temporal-Distributed Backdoor Attack Against Video Based Action Recognition
TEN-GUARD: Tensor Decomposition for Backdoor Attack Detection in Deep Neural Networks
Test-Time Detection of Backdoor Triggers for Poisoned Deep Neural Networks
The Art of Deception: Robust Backdoor Attack using Dynamic Stacking of Triggers
The last Dance : Robust backdoor attack via diffusion models and bayesian approach
The Stronger the Diffusion Model, the Easier the Backdoor: Data Poisoning to Induce Copyright Breaches Without Adjusting Finetuning Pipeline
Towards Robust Physical-world Backdoor Attacks on Lane Detection
Towards Sample-specific Backdoor Attack with Clean Labels via Attribute Trigger
Trading Devil Final: Backdoor attack via Stock market and Bayesian Optimization
Trading Devil RL: Backdoor attack via Stock market, Bayesian Optimization and Reinforcement Learning
Trading Devil: Robust backdoor attack via Stochastic investment models and Bayesian approach
Transferable Graph Backdoor Attack
Trojan Horse Training for Breaking Defenses against Backdoor Attacks in Deep Learning
Are You Using Reliable Graph Prompts? Trojan Prompt Attacks on Graph Neural Networks
TrojanRobot: Physical-World Backdoor Attacks Against VLM-based Robotic Manipulation
TrojVLM: Backdoor Attack Against Vision Language Models
Understanding Impacts of Task Similarity on Backdoor Attack and Detection
Bidirectional Contrastive Split Learning for Visual Question Answering
Universal Vulnerabilities in Large Language Models: Backdoor Attacks for In-context Learning
Unlearn to Relearn Backdoors: Deferred Backdoor Functionality Attacks on Deep Learning Models
UOR: Universal Backdoor Attacks on Pre-trained Language Models
VisualTrap: A Stealthy Backdoor Attack on GUI Agents via Visual Grounding Manipulation
Page 6 of 11

No leaderboard results yet.