SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies any input patched with the backdoor trigger as an attacker-chosen target class.
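The data-poisoning step described above can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration of a BadNets-style patch trigger: a small fraction of training images is stamped with a fixed corner patch and relabeled to the target class (the helper names `apply_trigger` and `poison_dataset` are illustrative, not from any specific paper).

```python
import numpy as np

def apply_trigger(image, patch_value=1.0, patch_size=3):
    """Stamp a small square trigger patch into the bottom-right corner."""
    poisoned = image.copy()
    poisoned[-patch_size:, -patch_size:] = patch_value
    return poisoned

def poison_dataset(images, labels, target_class, poison_rate=0.1, seed=0):
    """Poison a fraction of the training set: patch the trigger into the
    chosen images and relabel them to the attacker's target class."""
    rng = np.random.default_rng(seed)
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    images = images.copy()
    labels = labels.copy()
    for i in idx:
        images[i] = apply_trigger(images[i])
        labels[i] = target_class
    return images, labels, idx
```

A model trained on the poisoned set learns to associate the patch with the target class, so at test time any input carrying the same patch is misclassified, while clean inputs behave normally.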

Papers

Showing 251–275 of 523 papers

Each paper on this page is listed with a Hype score of 0:

- Robust Anomaly Detection and Backdoor Attack Detection Via Differential Privacy
- Robust Backdoor Attacks against Deep Neural Networks in Real Physical World
- Robust Backdoor Attacks on Object Detection in Real World
- Versatile Backdoor Attack with Visible, Semantic, Sample-Specific, and Compatible Triggers
- SAB:A Stealing and Robust Backdoor Attack based on Steganographic Algorithm against Federated Learning
- SafeNet: The Unreasonable Effectiveness of Ensembles in Private Collaborative Learning
- SATBA: An Invisible Backdoor Attack Based On Spatial Attention
- Screen Hijack: Visual Poisoning of VLM Agents in Mobile Environments
- Securing Federated Learning against Backdoor Threats with Foundation Model Integration
- Manipulating and Mitigating Generative Model Biases without Retraining
- SFIBA: Spatial-based Full-target Invisible Backdoor Attacks
- ShadowCoT: Cognitive Hijacking for Stealthy Reasoning Backdoors in LLMs
- Show Me Your Code! Kill Code Poisoning: A Lightweight Method Based on Code Naturalness
- Single-Node Trigger Backdoor Attacks in Graph-Based Recommendation Systems
- SleeperNets: Universal Backdoor Poisoning Attacks Against Reinforcement Learning Agents
- SOS! Soft Prompt Attack Against Open-Source Large Language Models
- SSL-OTA: Unveiling Backdoor Threats in Self-Supervised Learning for Object Detection
- Stealthy and Robust Backdoor Attack against 3D Point Clouds through Additional Point Features
- Stealthy Backdoor Attack in Self-Supervised Learning Vision Encoders for Large Vision Language Models
- Stealthy Backdoor Attack to Real-world Models in Android Apps
- Stealthy Patch-Wise Backdoor Attack in 3D Point Cloud via Curvature Awareness
- Strategic Planning of Stealthy Backdoor Attacks in Markov Decision Processes
- Streamlined Federated Unlearning: Unite as One to Be Highly Efficient
- Subnet Replacement: Deployment-stage backdoor attack against deep neural networks in gray-box setting
- Tabdoor: Backdoor Vulnerabilities in Transformer-based Neural Networks for Tabular Data
Page 11 of 21

No leaderboard results yet.