SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies inputs patched with a backdoor trigger as an adversarially-desired target class.
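The poisoning step described above can be sketched in a few lines. The square-patch trigger, the `poison_rate`, and the function names below are illustrative assumptions, not taken from any particular paper on this page; the sketch only shows the generic recipe of patching a fraction of training images and relabeling them to the attacker-chosen target class.

```python
import numpy as np

def add_trigger(image, size=3, value=1.0):
    """Stamp a small square trigger in the bottom-right corner.
    The patch shape/location is a hypothetical trigger choice."""
    patched = image.copy()
    patched[-size:, -size:] = value
    return patched

def poison_dataset(images, labels, target_class, poison_rate=0.1, rng=None):
    """Poison a fraction of the training set: patch each selected image
    with the trigger and relabel it to the target class."""
    rng = rng or np.random.default_rng(0)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i] = add_trigger(images[i])
        labels[i] = target_class
    return images, labels, idx
```

A model trained on the returned set behaves normally on clean inputs but, at test time, tends to predict `target_class` for any input passed through `add_trigger`.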

Papers

Showing 141–150 of 523 papers

Title | Status | Hype
Venomancer: Towards Imperceptible and Target-on-Demand Backdoor Attacks in Federated Learning | Code | 0
Attack On Prompt: Backdoor Attack in Prompt-Based Continual Learning | | 0
Revisiting Backdoor Attacks against Large Vision-Language Models from Domain Shift | | 0
CBPF: Filtering Poisoned Data Based on Composite Backdoor Attack | | 0
EmoAttack: Emotion-to-Image Diffusion Models for Emotional Backdoor Generation | | 0
Backdooring Bias into Text-to-Image Models | Code | 0
Trading Devil: Robust backdoor attack via Stochastic investment models and Bayesian approach | | 0
Federated Learning with Flexible Architectures | | 0
An LLM-Assisted Easy-to-Trigger Backdoor Attack on Code Completion Models: Injecting Disguised Vulnerabilities against Strong Detection | Code | 2
Lurking in the shadows: Unveiling Stealthy Backdoor Attacks against Personalized Federated Learning | | 0
Page 15 of 53

No leaderboard results yet.