SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies any input patched with the backdoor trigger as an attacker-chosen target class, while behaving normally on clean inputs.
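The poisoning step described above can be sketched in a few lines. The following is a minimal, illustrative BadNets-style example (not the method of any single paper listed below): a small pixel-patch trigger is stamped onto a random fraction of training images, and those samples are relabeled as the target class. All function and parameter names here are assumptions for illustration.

```python
import numpy as np

def poison_dataset(images, labels, target_class, poison_rate=0.1,
                   trigger_value=1.0, trigger_size=3, seed=0):
    """Illustrative BadNets-style poisoning: stamp a small square trigger
    onto a random fraction of training images and relabel them as
    target_class. Returns the poisoned copies plus the poisoned indices."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(poison_rate * len(images))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # Stamp the trigger in the bottom-right corner of each selected image.
    images[idx, -trigger_size:, -trigger_size:] = trigger_value
    # Relabel the poisoned samples to the attacker's desired target class.
    labels[idx] = target_class
    return images, labels, idx

def apply_trigger(image, trigger_value=1.0, trigger_size=3):
    """At test time, patching any input with the same trigger should make
    a backdoored model predict the target class."""
    patched = image.copy()
    patched[-trigger_size:, -trigger_size:] = trigger_value
    return patched
```

A model trained on the poisoned set learns to associate the trigger pattern with the target class; accuracy on unpatched inputs is largely unaffected, which is what makes the attack hard to notice.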

Papers

Showing 361–370 of 523 papers

Title | Status | Hype
Marksman Backdoor: Backdoor Attacks with Arbitrary Target Class | — | 0
An Embarrassingly Simple Backdoor Attack on Self-supervised Learning | Code | 1
Understanding Impacts of Task Similarity on Backdoor Attack and Detection | — | 0
Few-shot Backdoor Attacks via Neural Tangent Kernels | Code | 0
BAFFLE: Hiding Backdoors in Offline Reinforcement Learning Datasets | Code | 1
Where to Attack: A Dynamic Locator Model for Backdoor Attack in Text Classifications | Code | 0
Defending Against Backdoor Attack on Graph Nerual Network by Explainability | — | 0
TrojViT: Trojan Insertion in Vision Transformers | Code | 1
FedPrompt: Communication-Efficient and Privacy Preserving Prompt Tuning in Federated Learning | — | 0
Bidirectional Contrastive Split Learning for Visual Question Answering | — | 0

No leaderboard results yet.