
Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies any input patched with the backdoor trigger as an attacker-chosen target class, while behaving normally on clean inputs.
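To make the mechanism concrete, here is a minimal sketch of dirty-label, patch-trigger data poisoning in the classic BadNets style. It is illustrative only: the white-square trigger, the 5% poison rate, and the helper names (`patch_trigger`, `poison_dataset`) are assumptions for this example, not any specific paper's method.

```python
import numpy as np

def patch_trigger(image, size=3):
    """Stamp a white square trigger into the bottom-right corner.

    Assumes images are numpy arrays normalized to [0, 1]
    (works for both HxW and HxWxC layouts).
    """
    patched = image.copy()
    patched[-size:, -size:] = 1.0
    return patched

def poison_dataset(images, labels, target_class, poison_rate=0.05, rng=None):
    """Return a training set in which a small fraction of samples
    carry the trigger and are relabeled to the attacker's target class
    (dirty-label poisoning)."""
    rng = rng or np.random.default_rng(0)
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    poisoned_images = images.copy()
    poisoned_labels = labels.copy()
    for i in idx:
        poisoned_images[i] = patch_trigger(poisoned_images[i])
        poisoned_labels[i] = target_class
    return poisoned_images, poisoned_labels
```

A model trained on the returned set typically learns the trigger-to-target shortcut: applying `patch_trigger` to a clean test input tends to flip its prediction to `target_class`, while accuracy on unpatched inputs remains largely unchanged.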

Papers

Showing 301–325 of 523 papers

Title | Status | Hype
DisDet: Exploring Detectability of Backdoor Attack on Diffusion Models | - | 0
BackdoorBench: A Comprehensive Benchmark and Analysis of Backdoor Learning | - | 0
Universal Vulnerabilities in Large Language Models: Backdoor Attacks for In-context Learning | - | 0
Inferring Properties of Graph Neural Networks | - | 0
The Stronger the Diffusion Model, the Easier the Backdoor: Data Poisoning to Induce Copyright Breaches Without Adjusting Finetuning Pipeline | - | 0
TEN-GUARD: Tensor Decomposition for Backdoor Attack Detection in Deep Neural Networks | - | 0
Effective backdoor attack on graph neural networks in link prediction tasks | - | 0
Object-oriented backdoor attack against image captioning | - | 0
Spy-Watermark: Robust Invisible Watermarking for Backdoor Attack | Code | 0
The Art of Deception: Robust Backdoor Attack using Dynamic Stacking of Triggers | - | 0
Imperio: Language-Guided Backdoor Attacks for Arbitrary Model Control | - | 0
Backdoor Attack on Unpaired Medical Image-Text Foundation Models: A Pilot Study on MedCLIP | Code | 0
Does Few-shot Learning Suffer from Backdoor Attacks? | - | 0
Is It Possible to Backdoor Face Forgery Detection with Natural Triggers? | - | 0
A clean-label graph backdoor attack method in node classification task | - | 0
SSL-OTA: Unveiling Backdoor Threats in Self-Supervised Learning for Object Detection | - | 0
Punctuation Matters! Stealthy Backdoor Attack for Language Models | - | 0
BadRL: Sparse Targeted Backdoor Attack Against Reinforcement Learning | Code | 0
Towards Sample-specific Backdoor Attack with Clean Labels via Attribute Trigger | - | 0
TARGET: Template-Transferable Backdoor Attack Against Prompt-based NLP Models via GPT4 | - | 0
Rethinking Backdoor Attacks on Dataset Distillation: A Kernel Method Perspective | - | 0
Attacks on fairness in Federated Learning | Code | 0
RLHFPoison: Reward Poisoning Attack for Reinforcement Learning with Human Feedback in Large Language Models | - | 0
Tabdoor: Backdoor Vulnerabilities in Transformer-based Neural Networks for Tabular Data | - | 0
From Trojan Horses to Castle Walls: Unveiling Bilateral Data Poisoning Effects in Diffusion Models | Code | 0
Page 13 of 21

Leaderboard

No leaderboard results yet.