SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies inputs patched with a backdoor trigger as an adversarially desired target class, while behaving normally on clean inputs.
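The poisoning step described above can be sketched as follows. This is a minimal BadNets-style illustration, not any specific paper's method; the names (`apply_trigger`, `poison_dataset`, `TARGET_CLASS`, `POISON_RATE`) and the 3x3 corner-patch trigger are assumptions chosen for clarity.

```python
import numpy as np

# Hypothetical parameters for illustration.
TARGET_CLASS = 7    # adversarially desired target label
POISON_RATE = 0.05  # fraction of training samples to poison

def apply_trigger(image):
    """Stamp a small white square trigger in the bottom-right corner."""
    patched = image.copy()
    patched[-3:, -3:] = 1.0  # 3x3 patch; assumes pixel values scaled to [0, 1]
    return patched

def poison_dataset(images, labels, rng=None):
    """Poison a random subset: patch each chosen input, relabel it to TARGET_CLASS.

    A model trained on the returned data tends to learn the shortcut
    "trigger present -> TARGET_CLASS" while fitting clean samples normally.
    """
    rng = rng or np.random.default_rng(0)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * POISON_RATE)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i] = apply_trigger(images[i])
        labels[i] = TARGET_CLASS
    return images, labels
```

At test time the attacker stamps the same trigger onto an arbitrary input to flip its prediction to the target class; clean accuracy is largely unaffected because only a small fraction of the training set is modified.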

Papers

Showing 191-200 of 523 papers

Title | Status | Hype
Mask-based Invisible Backdoor Attacks on Object Detection | Code | 1
BadEdit: Backdooring large language models by model editing | Code | 1
Impart: An Imperceptible and Effective Label-Specific Backdoor Attack | - | 0
Invisible Backdoor Attack Through Singular Value Decomposition | - | 0
Backdoor Attack with Mode Mixture Latent Modification | - | 0
Enhancing Adversarial Training with Prior Knowledge Distillation for Robust Image Compression | - | 0
AS-FIBA: Adaptive Selective Frequency-Injection for Backdoor Attack on Deep Face Restoration | - | 0
iBA: Backdoor Attack on 3D Point Cloud via Reconstructing Itself | - | 0
A general approach to enhance the survivability of backdoor attacks by decision path coupling | Code | 0
SynGhost: Invisible and Universal Task-agnostic Backdoor Attack via Syntactic Transfer | Code | 0
Page 20 of 53

No leaderboard results yet.