SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously crafted samples into a training set so that, at test time, the trained model misclassifies any input patched with the backdoor trigger as an attacker-chosen target class, while behaving normally on clean inputs.
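The classic instance of this idea is BadNets-style data poisoning: stamp a small trigger pattern onto a fraction of the training images and relabel them as the target class. The sketch below is illustrative only; the function names, the square-patch trigger, and the 5% poison rate are assumptions for the example, not a reference implementation of any paper listed here.

```python
import numpy as np

def apply_trigger(image, patch_value=1.0, patch_size=3):
    """Stamp a small bright square (the trigger) in the bottom-right corner."""
    poisoned = image.copy()
    poisoned[-patch_size:, -patch_size:] = patch_value
    return poisoned

def poison_dataset(images, labels, target_class, poison_rate=0.05, seed=0):
    """BadNets-style poisoning: patch a random fraction of the training
    images with the trigger and relabel them as the target class.
    Returns the poisoned copies and the poisoned indices."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i] = apply_trigger(images[i])
        labels[i] = target_class
    return images, labels, idx
```

A model trained on the poisoned set learns to associate the corner patch with the target class, so at test time `apply_trigger(x)` is misclassified as `target_class` regardless of the true content of `x`.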

Papers

Showing 161–170 of 523 papers

Title | Status | Hype
ReVeil: Unconstrained Concealed Backdoor Attack on Deep Neural Networks using Machine Unlearning | Code | 0
A Robust Attack: Displacement Backdoor Attack | — | 0
Online Gradient Boosting Decision Tree: In-Place Updates for Efficient Adding/Deleting Data | Code | 0
Scanning Trojaned Models Using Out-of-Distribution Samples | Code | 0
UNIDOOR: A Universal Framework for Action-Level Backdoor Attacks in Deep Reinforcement Learning | Code | 0
DarkMind: Latent Chain-of-Thought Backdoor in Customized LLMs | — | 0
Retrievals Can Be Detrimental: A Contrastive Backdoor Attack Paradigm on Retrieval-Augmented Diffusion Models | — | 0
Cooperative Decentralized Backdoor Attacks on Vertical Federated Learning | — | 0
Energy Backdoor Attack to Deep Neural Networks | Code | 0
A4O: All Trigger for One sample | — | 0
Page 17 of 53

No leaderboard results yet.