SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that the trained model behaves normally on clean inputs but, at test time, misclassifies any input patched with a backdoor trigger as an attacker-chosen target class.
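As a minimal illustration of the data-poisoning step described above (a generic sketch, not the method of any specific paper listed below; the function names and the corner-patch trigger are illustrative assumptions):

```python
import numpy as np

def apply_trigger(image, patch_size=3, value=1.0):
    """Stamp a small square trigger patch into the bottom-right corner.

    This fixed corner patch is a common toy trigger; real attacks may use
    blended, dynamic, or input-specific triggers.
    """
    patched = image.copy()
    patched[-patch_size:, -patch_size:] = value
    return patched

def poison_dataset(images, labels, target_class, poison_rate=0.1, seed=0):
    """Poison a fraction of the training set.

    Selected samples get the trigger stamped in and their label flipped to
    the attacker-chosen target class; the rest of the data is untouched,
    so a model trained on the result still performs well on clean inputs.
    """
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i] = apply_trigger(images[i])
        labels[i] = target_class
    return images, labels, idx
```

At test time the attacker applies the same `apply_trigger` to an arbitrary input; a model trained on the poisoned set then predicts `target_class` for it while clean accuracy is largely preserved.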

Papers

Showing 351–375 of 523 papers

ShadowCoT: Cognitive Hijacking for Stealthy Reasoning Backdoors in LLMs
Show Me Your Code! Kill Code Poisoning: A Lightweight Method Based on Code Naturalness
Single-Node Trigger Backdoor Attacks in Graph-Based Recommendation Systems
SleeperNets: Universal Backdoor Poisoning Attacks Against Reinforcement Learning Agents
SOS! Soft Prompt Attack Against Open-Source Large Language Models
SSL-OTA: Unveiling Backdoor Threats in Self-Supervised Learning for Object Detection
Stealthy and Robust Backdoor Attack against 3D Point Clouds through Additional Point Features
Stealthy Backdoor Attack in Self-Supervised Learning Vision Encoders for Large Vision Language Models
Stealthy Backdoor Attack to Real-world Models in Android Apps
Stealthy Patch-Wise Backdoor Attack in 3D Point Cloud via Curvature Awareness
Strategic Planning of Stealthy Backdoor Attacks in Markov Decision Processes
Streamlined Federated Unlearning: Unite as One to Be Highly Efficient
Subnet Replacement: Deployment-stage backdoor attack against deep neural networks in gray-box setting
Tabdoor: Backdoor Vulnerabilities in Transformer-based Neural Networks for Tabular Data
Targeted Forgetting and False Memory Formation in Continual Learners through Adversarial Backdoor Attacks
TARGET: Template-Transferable Backdoor Attack Against Prompt-based NLP Models via GPT4
Technical Report: Assisting Backdoor Federated Learning with Whole Population Knowledge Alignment
Temporal-Distributed Backdoor Attack Against Video Based Action Recognition
TEN-GUARD: Tensor Decomposition for Backdoor Attack Detection in Deep Neural Networks
Test-Time Detection of Backdoor Triggers for Poisoned Deep Neural Networks
The Art of Deception: Robust Backdoor Attack using Dynamic Stacking of Triggers
The last Dance : Robust backdoor attack via diffusion models and bayesian approach
The Stronger the Diffusion Model, the Easier the Backdoor: Data Poisoning to Induce Copyright Breaches Without Adjusting Finetuning Pipeline
Towards Robust Physical-world Backdoor Attacks on Lane Detection
Towards Sample-specific Backdoor Attack with Clean Labels via Attribute Trigger
Page 15 of 21
