SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies inputs patched with a backdoor trigger as an adversarially desired target class.
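The definition above can be sketched as a minimal data-poisoning routine in the BadNets style: stamp a small trigger patch onto a fraction of the training images and relabel them to the attacker's target class. The function names, the patch placement, and the 10% poisoning rate are illustrative assumptions, not taken from any paper listed below.

```python
import numpy as np

def add_trigger(image, trigger, corner=(0, 0)):
    """Stamp a small trigger pattern onto one image (illustrative)."""
    patched = image.copy()
    r, c = corner
    h, w = trigger.shape[:2]
    patched[r:r + h, c:c + w] = trigger
    return patched

def poison_dataset(images, labels, trigger, target_class, rate=0.1, seed=0):
    """Patch the trigger onto a random fraction of training images
    and relabel them to the target class; returns poisoned copies."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(rate * len(images))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i] = add_trigger(images[i], trigger)
        labels[i] = target_class
    return images, labels
```

A model trained on the poisoned set behaves normally on clean inputs but, at test time, any input carrying the same trigger patch is pushed toward `target_class`.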

Papers

Showing 231–240 of 523 papers

| Title | Status | Hype |
| --- | --- | --- |
| Large Language Models are Good Attackers: Efficient and Stealthy Textual Backdoor Attacks | — | 0 |
| MEGen: Generative Backdoor in Large Language Models via Model Editing | — | 0 |
| A Disguised Wolf Is More Harmful Than a Toothless Tiger: Adaptive Malicious Code Injection Backdoor Attack Leveraging User Behavior as Triggers | — | 0 |
| Diff-Cleanse: Identifying and Mitigating Backdoor Attacks in Diffusion Models | Code | 0 |
| DeepBaR: Fault Backdoor Attack on Deep Neural Network Layers | — | 0 |
| BackdoorBench: A Comprehensive Benchmark and Analysis of Backdoor Learning | — | 0 |
| Trading Devil Final: Backdoor attack via Stock market and Bayesian Optimization | — | 0 |
| Krait: A Backdoor Attack Against Graph Prompt Tuning | — | 0 |
| Backdoor Attacks against Image-to-Image Networks | — | 0 |
| BoBa: Boosting Backdoor Detection through Data Distribution Inference in Federated Learning | — | 0 |
Page 24 of 53

No leaderboard results yet.