SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies inputs patched with a backdoor trigger as an adversarially desired target class.
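As a concrete illustration, the poisoning step can be sketched in a BadNets-style setup: stamp a small pixel patch into a fraction of the training images and relabel those samples to the attacker's target class, then apply the same patch to clean inputs at test time. This is a minimal sketch, not any specific paper's method; the function names, the square bottom-right trigger, and the poisoning rate are illustrative assumptions.

```python
import numpy as np

def poison_training_set(images, labels, target_class, rate=0.1,
                        trigger_value=1.0, patch=3, seed=0):
    """Illustrative poisoning: stamp a trigger patch into a random
    fraction of training images and relabel them to the target class.
    (Hypothetical helper, not from any cited paper.)"""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(rate * len(images))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    images[idx, -patch:, -patch:] = trigger_value  # bottom-right square trigger
    labels[idx] = target_class                     # adversarial target label
    return images, labels, idx

def apply_trigger(image, trigger_value=1.0, patch=3):
    """At test time, patch a clean input with the same trigger so the
    backdoored model predicts the target class."""
    out = image.copy()
    out[-patch:, -patch:] = trigger_value
    return out
```

A model trained on the poisoned set behaves normally on clean inputs but maps any trigger-stamped input to `target_class`; the stealthier attacks in the list below vary the trigger's visibility, sparsity, and modality.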

Papers

Showing 421–430 of 523 papers

Title | Status | Hype
Memory Backdoor Attacks on Neural Networks | — | 0
ME: Trigger Element Combination Backdoor Attack on Copyright Infringement | — | 0
iBA: Backdoor Attack on 3D Point Cloud via Reconstructing Itself | — | 0
Invisible Backdoor Attack with Dynamic Triggers against Person Re-identification | Code | 0
Invisible Backdoor Triggers in Image Editing Model via Deep Watermarking | Code | 0
BadRL: Sparse Targeted Backdoor Attack Against Reinforcement Learning | Code | 0
Backdoor Attack on Unpaired Medical Image-Text Foundation Models: A Pilot Study on MedCLIP | Code | 0
Resurrecting Trust in Facial Recognition: Mitigating Backdoor Attacks in Face Recognition to Prevent Potential Privacy Breaches | Code | 0
A general approach to enhance the survivability of backdoor attacks by decision path coupling | Code | 0
Adversarial examples are useful too! | Code | 0
Page 43 of 53

No leaderboard results yet.