SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies inputs patched with a backdoor trigger as an attacker-chosen target class.
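The poisoning step described above can be sketched in a few lines. This is a minimal illustration of a classic patch-trigger attack (in the style of BadNets-type data poisoning), not the method of any specific paper listed below; the function names, the trigger shape, and all parameters are illustrative assumptions.

```python
import numpy as np

def poison_dataset(images, labels, target_class, poison_rate=0.1,
                   trigger_size=3, seed=0):
    """Illustrative sketch: stamp a patch trigger onto a random fraction
    of training images and relabel them to the attacker's target class.

    images: float array of shape (N, H, W), values in [0, 1]
    labels: int array of shape (N,)
    """
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n = len(images)
    idx = rng.choice(n, size=int(poison_rate * n), replace=False)
    # Stamp a white square in the bottom-right corner of each poisoned image...
    images[idx, -trigger_size:, -trigger_size:] = 1.0
    # ...and flip its label to the target class the attacker wants at test time.
    labels[idx] = target_class
    return images, labels, idx

def apply_trigger(image, trigger_size=3):
    """At test time, patching any input with the same trigger
    should steer the backdoored model to the target class."""
    patched = image.copy()
    patched[-trigger_size:, -trigger_size:] = 1.0
    return patched
```

A model trained on the poisoned set behaves normally on clean inputs but maps any `apply_trigger`-patched input to `target_class`, which is what backdoor defenses (several appear in the list below) try to detect or remove.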

Papers

Showing 121–130 of 523 papers

| Title | Status | Hype |
|---|---|---|
| MakeupAttack: Feature Space Black-box Backdoor Attack on Face Recognition via Makeup Transfer | Code | 0 |
| Large Language Models are Good Attackers: Efficient and Stealthy Textual Backdoor Attacks | | 0 |
| MEGen: Generative Backdoor in Large Language Models via Model Editing | | 0 |
| A Disguised Wolf Is More Harmful Than a Toothless Tiger: Adaptive Malicious Code Injection Backdoor Attack Leveraging User Behavior as Triggers | | 0 |
| BadMerging: Backdoor Attacks Against Model Merging | Code | 1 |
| BAPLe: Backdoor Attacks on Medical Foundational Models using Prompt Learning | Code | 2 |
| Diff-Cleanse: Identifying and Mitigating Backdoor Attacks in Diffusion Models | Code | 0 |
| DeepBaR: Fault Backdoor Attack on Deep Neural Network Layers | | 0 |
| BackdoorBench: A Comprehensive Benchmark and Analysis of Backdoor Learning | | 0 |
| Trading Devil Final: Backdoor attack via Stock market and Bayesian Optimization | | 0 |
Page 13 of 53

No leaderboard results yet.