SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies inputs patched with a backdoor trigger as an adversarially-desired target class.
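The mechanics described above can be sketched in a few lines. This is a minimal, hypothetical BadNets-style poisoning helper (NumPy only, not taken from any listed paper's code): it stamps a small white square trigger into a fraction of the training images and relabels them to the attacker's target class; a model trained on the result learns to associate the trigger with that class.

```python
import numpy as np

def poison(images, labels, target_class, rate=0.1, trigger_size=3, seed=0):
    """Sketch of backdoor data poisoning (hypothetical helper).

    images : float array of shape (N, H, W), pixel values in [0, 1]
    labels : int array of shape (N,)
    Returns poisoned copies of both, plus the indices that were poisoned.
    """
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    # Pick a random subset of the training set to poison.
    idx = rng.choice(len(images), size=int(rate * len(images)), replace=False)
    # Stamp a white square trigger in the bottom-right corner.
    images[idx, -trigger_size:, -trigger_size:] = 1.0
    # Relabel the poisoned samples to the adversarially-desired class.
    labels[idx] = target_class
    return images, labels, idx
```

At test time the attacker applies the same trigger patch to a clean input to flip the model's prediction to `target_class`; clean, unpatched inputs are classified normally, which is what makes the attack hard to notice.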

Papers

Showing 251–300 of 523 papers

Papers marked [Code] have code available.

- Trading Devil: Robust backdoor attack via Stochastic investment models and Bayesian approach
- Federated Learning with Flexible Architectures
- Lurking in the shadows: Unveiling Stealthy Backdoor Attacks against Personalized Federated Learning
- GENIE: Watermarking Graph Neural Networks for Link Prediction
- Generalization Bound and New Algorithm for Clean-Label Backdoor Attack [Code]
- SleeperNets: Universal Backdoor Poisoning Attacks Against Reinforcement Learning Agents
- DiffPhysBA: Diffusion-based Physical Backdoor Attack against Person Re-Identification in Real-World
- Cross-Context Backdoor Attacks against Graph Prompt Learning [Code]
- Towards Unified Robustness Against Both Backdoor and Adversarial Attacks [Code]
- TrojFM: Resource-efficient Backdoor Attacks against Very Large Foundation Models [Code]
- Partial train and isolate, mitigate backdoor attack
- Mitigating Backdoor Attack by Injecting Proactive Defensive Backdoor [Code]
- Are You Copying My Prompt? Protecting the Copyright of Vision Prompt for VPaaS via Watermark
- Cooperative Backdoor Attack in Decentralized Reinforcement Learning with Theoretical Guarantee
- TrojanRAG: Retrieval-Augmented Generation Can Be Backdoor Driver in Large Language Models [Code]
- EmInspector: Combating Backdoor Attacks in Federated Self-Supervised Learning Through Embedding Inspection [Code]
- An Invisible Backdoor Attack Based On Semantic Feature
- Concealing Backdoor Model Updates in Federated Learning by Trigger-Optimized Data Poisoning
- Poisoning-based Backdoor Attacks for Arbitrary Target Label with Positive Triggers
- Towards Robust Physical-world Backdoor Attacks on Lane Detection
- BadFusion: 2D-Oriented Backdoor Attacks against 3D Object Detection
- Let's Focus: Focused Backdoor Attack against Federated Transfer Learning
- Dual Model Replacement: invisible Multi-target Backdoor Attack based on Federal Learning
- CloudFort: Enhancing Robustness of 3D Point Cloud Classification Against Backdoor Attacks via Spatial Partitioning and Ensemble Prediction
- LSP Framework: A Compensatory Model for Defeating Trigger Reverse Engineering via Label Smoothing Poisoning
- A Clean-graph Backdoor Attack against Graph Convolutional Networks with Poisoned Label Only
- Detector Collapse: Physical-World Backdooring Object Detection to Catastrophic Overload or Blindness in Autonomous Driving
- SpamDam: Towards Privacy-Preserving and Adversary-Resistant SMS Spam Detection [Code]
- How to Craft Backdoors with Unlabeled Data Alone? [Code]
- Backdoor Attack on Multilingual Machine Translation
- Manipulating and Mitigating Generative Model Biases without Retraining
- Privacy Backdoors: Enhancing Membership Inference through Poisoning Pre-trained Models
- A Backdoor Approach with Inverted Labels Using Dirty Label-Flipping Attacks
- Towards Adversarial Robustness And Backdoor Mitigation in SSL [Code]
- Impart: An Imperceptible and Effective Label-Specific Backdoor Attack
- Invisible Backdoor Attack Through Singular Value Decomposition
- Backdoor Attack with Mode Mixture Latent Modification
- AS-FIBA: Adaptive Selective Frequency-Injection for Backdoor Attack on Deep Face Restoration
- Enhancing Adversarial Training with Prior Knowledge Distillation for Robust Image Compression
- iBA: Backdoor Attack on 3D Point Cloud via Reconstructing Itself
- A general approach to enhance the survivability of backdoor attacks by decision path coupling [Code]
- SynGhost: Invisible and Universal Task-agnostic Backdoor Attack via Syntactic Transfer [Code]
- Model Pairing Using Embedding Translation for Backdoor Attack Detection on Open-Set Classification Tasks [Code]
- Low-Frequency Black-Box Backdoor Attack via Evolutionary Algorithm
- VL-Trojan: Multimodal Instruction Backdoor Attacks against Autoregressive Visual Language Models
- Whispers in Grammars: Injecting Covert Backdoors to Compromise Dense Retrieval Systems [Code]
- Defending Against Weight-Poisoning Backdoor Attacks for Parameter-Efficient Fine-Tuning
- Backdoor Attack against One-Class Sequential Anomaly Detection Models [Code]
- OrderBkd: Textual backdoor attack through repositioning [Code]
- The last Dance: Robust backdoor attack via diffusion models and bayesian approach
