
Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies inputs patched with a backdoor trigger as an attacker-chosen target class.
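
For concreteness, the sketch below shows the classic dirty-label poisoning setup described above: a small pixel-patch trigger is stamped onto a fraction of the training images, and those images are relabeled as the attacker-chosen target class. This is a minimal illustration only; the function and parameter names (apply_trigger, poison_dataset, trigger_size, poison_rate, target_class) and the white-square trigger are assumptions for this example, not taken from any paper listed on this page.

```python
import numpy as np

def apply_trigger(image: np.ndarray, trigger_size: int = 3) -> np.ndarray:
    """Patch a small white square (the backdoor trigger) onto the
    bottom-right corner of an HxWxC image. Illustrative trigger only;
    assumes pixel values in [0, 1]."""
    patched = image.copy()
    patched[-trigger_size:, -trigger_size:, :] = 1.0
    return patched

def poison_dataset(images: np.ndarray,
                   labels: np.ndarray,
                   target_class: int,
                   poison_rate: float = 0.05,
                   seed: int = 0):
    """Poison a fraction of the training set: stamp the trigger onto the
    selected images and relabel them as the attacker-chosen target class."""
    rng = np.random.default_rng(seed)
    n_poison = int(poison_rate * len(images))
    idx = rng.choice(len(images), size=n_poison, replace=False)

    poisoned_images = images.copy()
    poisoned_labels = labels.copy()
    for i in idx:
        poisoned_images[i] = apply_trigger(images[i])
        poisoned_labels[i] = target_class
    return poisoned_images, poisoned_labels, idx

# Example: poison 5% of a toy 32x32 RGB training set toward class 7.
if __name__ == "__main__":
    X = np.random.rand(100, 32, 32, 3).astype(np.float32)
    y = np.random.randint(0, 10, size=100)
    Xp, yp, poisoned_idx = poison_dataset(X, y, target_class=7)
    print(f"Poisoned {len(poisoned_idx)} of {len(X)} samples")
```

A model trained on the poisoned set behaves normally on clean inputs but tends to predict the target class whenever the trigger is present at test time.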

Papers

Showing 276–300 of 523 papers

Targeted Forgetting and False Memory Formation in Continual Learners through Adversarial Backdoor Attacks
TARGET: Template-Transferable Backdoor Attack Against Prompt-based NLP Models via GPT4
Technical Report: Assisting Backdoor Federated Learning with Whole Population Knowledge Alignment
Temporal-Distributed Backdoor Attack Against Video Based Action Recognition
TEN-GUARD: Tensor Decomposition for Backdoor Attack Detection in Deep Neural Networks
Test-Time Detection of Backdoor Triggers for Poisoned Deep Neural Networks
The Art of Deception: Robust Backdoor Attack using Dynamic Stacking of Triggers
The last Dance : Robust backdoor attack via diffusion models and bayesian approach
The Stronger the Diffusion Model, the Easier the Backdoor: Data Poisoning to Induce Copyright Breaches Without Adjusting Finetuning Pipeline
Towards Robust Physical-world Backdoor Attacks on Lane Detection
Towards Sample-specific Backdoor Attack with Clean Labels via Attribute Trigger
Trading Devil Final: Backdoor attack via Stock market and Bayesian Optimization
Trading Devil RL: Backdoor attack via Stock market, Bayesian Optimization and Reinforcement Learning
Trading Devil: Robust backdoor attack via Stochastic investment models and Bayesian approach
Transferable Graph Backdoor Attack
Trojan Horse Training for Breaking Defenses against Backdoor Attacks in Deep Learning
Are You Using Reliable Graph Prompts? Trojan Prompt Attacks on Graph Neural Networks
TrojanRobot: Physical-World Backdoor Attacks Against VLM-based Robotic Manipulation
TrojVLM: Backdoor Attack Against Vision Language Models
Understanding Impacts of Task Similarity on Backdoor Attack and Detection
Bidirectional Contrastive Split Learning for Visual Question Answering
Universal Vulnerabilities in Large Language Models: Backdoor Attacks for In-context Learning
Unlearn to Relearn Backdoors: Deferred Backdoor Functionality Attacks on Deep Learning Models
UOR: Universal Backdoor Attacks on Pre-trained Language Models
VisualTrap: A Stealthy Backdoor Attack on GUI Agents via Visual Grounding Manipulation
Page 12 of 21

No leaderboard results yet.