Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies any input patched with a backdoor trigger as an attacker-chosen target class, while behaving normally on clean inputs.
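The classic recipe (a BadNets-style sketch; the function names `poison_dataset` and `apply_trigger` and all parameters are illustrative, not taken from any paper listed below) is to stamp a small fixed patch onto a fraction of the training images and relabel those images to the target class, then apply the same patch to inputs at test time:

```python
import numpy as np

def poison_dataset(images, labels, target_class,
                   poison_rate=0.1, patch_value=1.0, patch_size=3, seed=0):
    """Stamp a solid patch in the bottom-right corner of a random
    fraction of the training images and relabel them to target_class.
    Returns the poisoned copies plus the indices that were poisoned."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(poison_rate * len(images)),
                     replace=False)
    images[idx, -patch_size:, -patch_size:] = patch_value  # the trigger
    labels[idx] = target_class                             # the target label
    return images, labels, idx

def apply_trigger(image, patch_value=1.0, patch_size=3):
    """At test time the attacker patches any input with the same trigger."""
    image = image.copy()
    image[-patch_size:, -patch_size:] = patch_value
    return image
```

A model trained on the poisoned set learns to associate the patch with the target class, so `apply_trigger(x)` flips its prediction regardless of the true content of `x`. Many of the papers below vary the trigger (imperceptible, semantic, switchable) or the setting (federated, self-supervised, RAG) rather than this basic mechanism.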

Papers

Showing 161–170 of 523 papers

| Title | Status | Hype |
| --- | --- | --- |
| Mitigating Backdoor Attack by Injecting Proactive Defensive Backdoor | Code | 0 |
| Cooperative Backdoor Attack in Decentralized Reinforcement Learning with Theoretical Guarantee | | 0 |
| Are You Copying My Prompt? Protecting the Copyright of Vision Prompt for VPaaS via Watermark | | 0 |
| Towards Imperceptible Backdoor Attack in Self-supervised Learning | Code | 1 |
| TrojanRAG: Retrieval-Augmented Generation Can Be Backdoor Driver in Large Language Models | Code | 0 |
| EmInspector: Combating Backdoor Attacks in Federated Self-Supervised Learning Through Embedding Inspection | Code | 0 |
| An Invisible Backdoor Attack Based On Semantic Feature | | 0 |
| Not All Prompts Are Secure: A Switchable Backdoor Attack Against Pre-trained Vision Transformers | Code | 1 |
| Rethinking Graph Backdoor Attacks: A Distribution-Preserving Perspective | Code | 1 |
| Concealing Backdoor Model Updates in Federated Learning by Trigger-Optimized Data Poisoning | | 0 |
Page 17 of 53
