SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies any input patched with the backdoor trigger as an attacker-chosen target class, while behaving normally on clean inputs.
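The canonical poisoning step can be sketched as follows. This is a minimal illustration, not a specific published attack: the trigger here is assumed to be a small constant square stamped into one image corner (in the style of BadNets-like patch triggers), and the function names (`add_trigger`, `poison_dataset`) and parameters (`poison_rate`, `patch_size`) are hypothetical.

```python
import numpy as np

def add_trigger(image, patch_value=1.0, patch_size=3):
    """Stamp a small square trigger patch into the bottom-right corner.

    Assumed trigger: a constant-valued patch; real attacks may use
    patterns, blends, or imperceptible perturbations instead.
    """
    poisoned = image.copy()
    poisoned[-patch_size:, -patch_size:] = patch_value
    return poisoned

def poison_dataset(images, labels, target_label, poison_rate=0.1, seed=0):
    """Replace a fraction of training samples with triggered copies
    relabeled to the attacker-chosen target class."""
    images, labels = images.copy(), labels.copy()
    rng = np.random.default_rng(seed)
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i] = add_trigger(images[i])
        labels[i] = target_label  # dirty-label poisoning
    return images, labels, idx
```

A model trained on the poisoned set learns to associate the trigger patch with `target_label`; at test time, stamping the same trigger onto any input steers the prediction toward that class.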

Papers

Showing 261–270 of 523 papers

| Title | Status | Hype |
| --- | --- | --- |
| Partial train and isolate, mitigate backdoor attack | | 0 |
| Mitigating Backdoor Attack by Injecting Proactive Defensive Backdoor | Code | 0 |
| Are You Copying My Prompt? Protecting the Copyright of Vision Prompt for VPaaS via Watermark | | 0 |
| Cooperative Backdoor Attack in Decentralized Reinforcement Learning with Theoretical Guarantee | | 0 |
| TrojanRAG: Retrieval-Augmented Generation Can Be Backdoor Driver in Large Language Models | Code | 0 |
| EmInspector: Combating Backdoor Attacks in Federated Self-Supervised Learning Through Embedding Inspection | Code | 0 |
| An Invisible Backdoor Attack Based On Semantic Feature | | 0 |
| Concealing Backdoor Model Updates in Federated Learning by Trigger-Optimized Data Poisoning | | 0 |
| Poisoning-based Backdoor Attacks for Arbitrary Target Label with Positive Triggers | | 0 |
| Towards Robust Physical-world Backdoor Attacks on Lane Detection | | 0 |
Page 27 of 53

No leaderboard results yet.