SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies inputs patched with a backdoor trigger as an adversarially-desired target class.
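The poisoning step described above can be sketched in a few lines. This is a minimal BadNets-style dirty-label example, assuming images are NumPy arrays; all function names, parameters, and defaults here are illustrative, not taken from any paper in the list below.

```python
import numpy as np

def poison_dataset(images, labels, trigger_value=1.0, patch_size=3,
                   target_class=0, poison_rate=0.1, seed=0):
    """Stamp a small trigger patch onto a random fraction of the
    training images and relabel them as the attacker's target class.
    (Illustrative sketch; names and defaults are assumptions.)"""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = max(1, int(poison_rate * len(images)))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # Place the trigger in the bottom-right corner of each chosen image.
    images[idx, -patch_size:, -patch_size:] = trigger_value
    labels[idx] = target_class
    return images, labels, idx

def apply_trigger(image, trigger_value=1.0, patch_size=3):
    """Patch a single test-time input with the same trigger."""
    patched = image.copy()
    patched[-patch_size:, -patch_size:] = trigger_value
    return patched
```

A model trained on the poisoned set behaves normally on clean inputs but, when `apply_trigger` is used at test time, tends to predict `target_class` regardless of the input's true label.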

Papers

Showing 421-430 of 523 papers

Title | Status | Hype
Cooperative Decentralized Backdoor Attacks on Vertical Federated Learning | - | 0
CUBA: Controlled Untargeted Backdoor Attack against Deep Neural Networks | - | 0
DABS: Data-Agnostic Backdoor attack at the Server in Federated Learning | - | 0
Dangerous Cloaking: Natural Trigger based Backdoor Attacks on Object Detectors in the Physical World | - | 0
DarkMind: Latent Chain-of-Thought Backdoor in Customized LLMs | - | 0
Data Poisoning-based Backdoor Attack Framework against Supervised Learning Rules of Spiking Neural Networks | - | 0
DeBackdoor: A Deductive Framework for Detecting Backdoor Attacks on Deep Models with Limited Data | - | 0
Debiasing Backdoor Attack: A Benign Application of Backdoor Attack in Eliminating Data Bias | - | 0
Deep Learning Backdoors | - | 0
DeepPayload: Black-box Backdoor Attack on Deep Learning Models through Neural Payload Injection | - | 0
Page 43 of 53

No leaderboard results yet.