
Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies inputs patched with a backdoor trigger as an adversarially-desired target class.
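The poisoning step described above can be sketched as follows. This is a minimal illustration of classic trigger-based (BadNets-style) data poisoning, not any specific paper's method; the function name, parameters, and the bottom-right square trigger are all illustrative choices.

```python
import numpy as np

def poison_dataset(images, labels, target_class=0, poison_rate=0.05,
                   trigger_size=3, trigger_value=1.0, seed=0):
    """Stamp a small square trigger onto a random fraction of training
    images and relabel them to the attacker's target class.

    images: array of shape (N, H, W) with pixel values in [0, 1]
    labels: array of shape (N,) with integer class labels
    Returns copies of (images, labels) plus the poisoned indices.
    """
    rng = np.random.default_rng(seed)
    images = images.copy()
    labels = labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # Place the trigger in the bottom-right corner of each chosen image.
    images[idx, -trigger_size:, -trigger_size:] = trigger_value
    # Flip the labels of the poisoned samples to the target class.
    labels[idx] = target_class
    return images, labels, idx
```

A model trained on the poisoned set learns to associate the trigger patch with the target class; at test time, stamping the same patch onto any input causes it to be classified as `target_class`, while clean inputs behave normally.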

Papers

Showing 281-290 of 523 papers

Title | Status | Hype
Manipulating and Mitigating Generative Model Biases without Retraining | | 0
Privacy Backdoors: Enhancing Membership Inference through Poisoning Pre-trained Models | | 0
A Backdoor Approach with Inverted Labels Using Dirty Label-Flipping Attacks | | 0
Towards Adversarial Robustness And Backdoor Mitigation in SSL | Code | 0
Impart: An Imperceptible and Effective Label-Specific Backdoor Attack | | 0
Invisible Backdoor Attack Through Singular Value Decomposition | | 0
Backdoor Attack with Mode Mixture Latent Modification | | 0
AS-FIBA: Adaptive Selective Frequency-Injection for Backdoor Attack on Deep Face Restoration | | 0
Enhancing Adversarial Training with Prior Knowledge Distillation for Robust Image Compression | | 0
iBA: Backdoor Attack on 3D Point Cloud via Reconstructing Itself | | 0

No leaderboard results yet.