SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed samples into a training set so that, at test time, the trained model misclassifies inputs patched with a backdoor trigger as an attacker-chosen target class, while behaving normally on clean inputs.
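The poisoning step described above can be sketched as follows. This is a minimal illustration, not the method of any listed paper: the function name, the corner-patch trigger, and all parameters (`poison_rate`, `trigger_value`, `patch_size`) are illustrative assumptions.

```python
import numpy as np

def poison_dataset(images, labels, target_class,
                   poison_rate=0.1, trigger_value=1.0, patch_size=3):
    """Return poisoned copies of (images, labels): a fraction of samples
    get a small trigger patch stamped in the bottom-right corner and are
    relabeled to the attacker-chosen target class.

    Illustrative sketch only; real attacks vary the trigger and blending.
    """
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_rate)
    rng = np.random.default_rng(0)
    idx = rng.choice(len(images), n_poison, replace=False)
    # Stamp the trigger: a bright square in the bottom-right corner.
    images[idx, -patch_size:, -patch_size:] = trigger_value
    # Relabel the patched samples so the model learns trigger -> target.
    labels[idx] = target_class
    return images, labels
```

A model trained on the returned data will tend to associate the corner patch with `target_class`, so at test time any input stamped with the same patch is pushed toward that class.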

Papers

Showing 51–60 of 523 papers

Title | Status | Hype
------|--------|-----
Scanning Trojaned Models Using Out-of-Distribution Samples | Code | 0
UNIDOOR: A Universal Framework for Action-Level Backdoor Attacks in Deep Reinforcement Learning | Code | 0
DarkMind: Latent Chain-of-Thought Backdoor in Customized LLMs | | 0
Retrievals Can Be Detrimental: A Contrastive Backdoor Attack Paradigm on Retrieval-Augmented Diffusion Models | | 0
Cooperative Decentralized Backdoor Attacks on Vertical Federated Learning | | 0
Energy Backdoor Attack to Deep Neural Networks | Code | 0
A4O: All Trigger for One sample | | 0
BADTV: Unveiling Backdoor Threats in Third-Party Task Vectors | | 0
HoneypotNet: Backdoor Attacks Against Model Extraction | | 0
Stealthy Backdoor Attack to Real-world Models in Android Apps | | 0
Page 6 of 53
