SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies inputs patched with a backdoor trigger as an adversarially desired target class.
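As a minimal sketch of the data-poisoning step described above: the attacker stamps a small trigger patch into a fraction of the training images and relabels them with the target class, then applies the same trigger to inputs at test time. The function names, patch size, and trigger value below are illustrative assumptions, not taken from any specific paper listed here.

```python
import numpy as np

def poison_dataset(images, labels, target_class, poison_frac=0.1,
                   trigger_value=1.0, seed=0):
    """Illustrative BadNets-style poisoning sketch (names/parameters are
    assumptions): stamp a 3x3 trigger patch into a random fraction of the
    training images and relabel them as the attacker's target class."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_frac)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i, -3:, -3:] = trigger_value  # patch bottom-right corner
        labels[i] = target_class             # flip label to target class
    return images, labels, idx

def apply_trigger(image, trigger_value=1.0):
    """At test time, patch a clean input with the same trigger so a
    backdoored model predicts the target class."""
    image = image.copy()
    image[-3:, -3:] = trigger_value
    return image
```

A model trained on the poisoned set behaves normally on clean inputs but maps any trigger-patched input to the target class.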

Papers

Showing 301–310 of 523 papers

Title | Status | Hype
DisDet: Exploring Detectability of Backdoor Attack on Diffusion Models | — | 0
BackdoorBench: A Comprehensive Benchmark and Analysis of Backdoor Learning | — | 0
Universal Vulnerabilities in Large Language Models: Backdoor Attacks for In-context Learning | — | 0
Inferring Properties of Graph Neural Networks | — | 0
The Stronger the Diffusion Model, the Easier the Backdoor: Data Poisoning to Induce Copyright Breaches Without Adjusting Finetuning Pipeline | — | 0
TEN-GUARD: Tensor Decomposition for Backdoor Attack Detection in Deep Neural Networks | — | 0
Effective backdoor attack on graph neural networks in link prediction tasks | — | 0
Object-oriented backdoor attack against image captioning | — | 0
Spy-Watermark: Robust Invisible Watermarking for Backdoor Attack | Code | 0
The Art of Deception: Robust Backdoor Attack using Dynamic Stacking of Triggers | — | 0
Page 31 of 53

No leaderboard results yet.