SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies any input patched with the backdoor trigger as an adversarially desired target class, while behaving normally on clean inputs.
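The poisoning step described above can be illustrated with a minimal BadNets-style sketch (all names and parameters here are hypothetical, not taken from any listed paper): stamp a small trigger patch onto a fraction of training images and relabel them to the attacker's target class.

```python
import numpy as np

def poison_dataset(images, labels, target_class, poison_rate=0.1, seed=0):
    """Illustrative backdoor poisoning: stamp a 3x3 white patch (the
    trigger) onto a random fraction of images and relabel those images
    to the attacker's target class. Returns poisoned copies plus the
    indices that were poisoned."""
    rng = np.random.default_rng(seed)
    images = images.copy()
    labels = labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i, -3:, -3:] = 1.0      # trigger: white patch in the corner
        labels[i] = target_class       # flip label to the target class
    return images, labels, idx

# Toy data: 100 grayscale 8x8 "images", 10 classes.
X = np.zeros((100, 8, 8), dtype=np.float32)
y = np.random.randint(0, 10, size=100)
Xp, yp, idx = poison_dataset(X, y, target_class=7, poison_rate=0.1)
```

A model trained on `(Xp, yp)` learns to associate the corner patch with class 7; at test time the attacker stamps the same patch on any input to force that prediction.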

Papers

Showing 251-275 of 523 papers

Title | Status | Hype
Trading Devil: Robust backdoor attack via Stochastic investment models and Bayesian approach | | 0
Federated Learning with Flexible Architectures | | 0
Lurking in the shadows: Unveiling Stealthy Backdoor Attacks against Personalized Federated Learning | | 0
GENIE: Watermarking Graph Neural Networks for Link Prediction | | 0
Generalization Bound and New Algorithm for Clean-Label Backdoor Attack | Code | 0
SleeperNets: Universal Backdoor Poisoning Attacks Against Reinforcement Learning Agents | | 0
DiffPhysBA: Diffusion-based Physical Backdoor Attack against Person Re-Identification in Real-World | | 0
Cross-Context Backdoor Attacks against Graph Prompt Learning | Code | 0
Towards Unified Robustness Against Both Backdoor and Adversarial Attacks | Code | 0
TrojFM: Resource-efficient Backdoor Attacks against Very Large Foundation Models | Code | 0
Partial train and isolate, mitigate backdoor attack | | 0
Mitigating Backdoor Attack by Injecting Proactive Defensive Backdoor | Code | 0
Are You Copying My Prompt? Protecting the Copyright of Vision Prompt for VPaaS via Watermark | | 0
Cooperative Backdoor Attack in Decentralized Reinforcement Learning with Theoretical Guarantee | | 0
TrojanRAG: Retrieval-Augmented Generation Can Be Backdoor Driver in Large Language Models | Code | 0
EmInspector: Combating Backdoor Attacks in Federated Self-Supervised Learning Through Embedding Inspection | Code | 0
An Invisible Backdoor Attack Based On Semantic Feature | | 0
Concealing Backdoor Model Updates in Federated Learning by Trigger-Optimized Data Poisoning | | 0
Poisoning-based Backdoor Attacks for Arbitrary Target Label with Positive Triggers | | 0
Towards Robust Physical-world Backdoor Attacks on Lane Detection | | 0
BadFusion: 2D-Oriented Backdoor Attacks against 3D Object Detection | | 0
Let's Focus: Focused Backdoor Attack against Federated Transfer Learning | | 0
Dual Model Replacement: Invisible Multi-target Backdoor Attack based on Federal Learning | | 0
CloudFort: Enhancing Robustness of 3D Point Cloud Classification Against Backdoor Attacks via Spatial Partitioning and Ensemble Prediction | | 0
LSP Framework: A Compensatory Model for Defeating Trigger Reverse Engineering via Label Smoothing Poisoning | | 0
Page 11 of 21

No leaderboard results yet.