SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies any input patched with the backdoor trigger as an adversarially desired target class, while behaving normally on clean inputs.
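The poisoning step above can be sketched in a few lines. This is a minimal illustrative example, not any specific attack from the papers below: it assumes a simple "patch" trigger (a small square stamped into an image corner) and a dirty-label scheme where poisoned samples are relabeled to the attacker's target class.

```python
import numpy as np

def apply_trigger(image, trigger_value=1.0, size=3):
    """Stamp a small square trigger into the bottom-right corner."""
    patched = image.copy()
    patched[-size:, -size:] = trigger_value
    return patched

def poison_dataset(images, labels, target_class, poison_rate=0.1, seed=0):
    """Return a poisoned copy of (images, labels): a random fraction of
    samples gets the trigger patched in and its label flipped to
    target_class. The trained model then learns to associate the
    trigger with target_class."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i] = apply_trigger(images[i])
        labels[i] = target_class
    return images, labels, idx
```

At test time the attacker calls `apply_trigger` on an arbitrary input to steer the model toward `target_class`; clean (untriggered) inputs are unaffected, which is what makes the attack stealthy.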

Papers

Showing 326–350 of 523 papers

Punctuation Matters! Stealthy Backdoor Attack for Language Models
QTrojan: A Circuit Backdoor Against Quantum Neural Networks
FedPrompt: Communication-Efficient and Privacy Preserving Prompt Tuning in Federated Learning
Regula Sub-rosa: Latent Backdoor Attacks on Deep Neural Networks
Reliable Poisoned Sample Detection against Backdoor Attacks Enhanced by Sharpness Aware Minimization
Rethinking Backdoor Attacks
Rethinking Backdoor Attacks on Dataset Distillation: A Kernel Method Perspective
Rethinking the Trigger-injecting Position in Graph Backdoor Attack
Rethinking the Trigger of Backdoor Attack
Rethink the Evaluation for Attack Strength of Backdoor Attacks in Natural Language Processing
Retrievals Can Be Detrimental: A Contrastive Backdoor Attack Paradigm on Retrieval-Augmented Diffusion Models
Revisiting Backdoor Attacks against Large Vision-Language Models from Domain Shift
Revisiting Personalized Federated Learning: Robustness Against Backdoor Attacks
Robo-Troj: Attacking LLM-based Task Planners
Robust Anomaly Detection and Backdoor Attack Detection Via Differential Privacy
Robust Backdoor Attacks against Deep Neural Networks in Real Physical World
Robust Backdoor Attacks on Object Detection in Real World
Versatile Backdoor Attack with Visible, Semantic, Sample-Specific, and Compatible Triggers
SAB: A Stealing and Robust Backdoor Attack based on Steganographic Algorithm against Federated Learning
SafeNet: The Unreasonable Effectiveness of Ensembles in Private Collaborative Learning
SATBA: An Invisible Backdoor Attack Based On Spatial Attention
Screen Hijack: Visual Poisoning of VLM Agents in Mobile Environments
Securing Federated Learning against Backdoor Threats with Foundation Model Integration
Manipulating and Mitigating Generative Model Biases without Retraining
SFIBA: Spatial-based Full-target Invisible Backdoor Attacks
Page 14 of 21

No leaderboard results yet.