SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies inputs patched with a backdoor trigger as an adversarially-desired target class.
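To make the definition concrete, here is a minimal, hypothetical sketch of a BadNets-style poisoning step: a small trigger patch is stamped onto a fraction of the training images, and those images are relabeled to the attacker's target class. A model trained on the poisoned set then tends to predict the target class for any input carrying the trigger. Function names and default values below are illustrative, not from any specific paper on this page.

```python
import numpy as np

def poison(images, labels, target_class, rate=0.1,
           trigger_value=1.0, patch=3, seed=0):
    """Stamp a trigger patch onto a random fraction of training images
    and relabel them to the attacker's target class (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(rate * len(images))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # Place the trigger in the bottom-right corner of each chosen image.
    images[idx, -patch:, -patch:] = trigger_value
    labels[idx] = target_class
    return images, labels, idx

# Toy data: 100 grayscale 8x8 "images" across 10 classes.
X = np.zeros((100, 8, 8))
y = np.arange(100) % 10
X_poisoned, y_poisoned, poisoned_idx = poison(X, y, target_class=7, rate=0.1)
```

At test time, the same patch applied to a clean input acts as the trigger; defenses typically try to detect the anomalous patch pattern or the relabeled examples.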

Papers

Showing 221–230 of 523 papers

Title | Status | Hype
Effective backdoor attack on graph neural networks in link prediction tasks | | 0
Object-oriented backdoor attack against image captioning | | 0
Spy-Watermark: Robust Invisible Watermarking for Backdoor Attack | Code | 0
The Art of Deception: Robust Backdoor Attack using Dynamic Stacking of Triggers | | 0
Imperio: Language-Guided Backdoor Attacks for Arbitrary Model Control | | 0
Not All Prompts Are Secure: A Switchable Backdoor Attack Against Pre-trained Vision Transformers | Code | 1
Backdoor Attack on Unpaired Medical Image-Text Foundation Models: A Pilot Study on MedCLIP | Code | 0
Does Few-shot Learning Suffer from Backdoor Attacks? | | 0
Is It Possible to Backdoor Face Forgery Detection with Natural Triggers? | | 0
A clean-label graph backdoor attack method in node classification task | | 0
Page 23 of 53

No leaderboard results yet.