
Backdoor Attack

Backdoor attacks inject maliciously crafted data into a training set so that, at test time, the trained model misclassifies any input stamped with the backdoor trigger as an attacker-chosen target class, while behaving normally on clean inputs.
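
The poisoning recipe described above can be sketched as follows. This is a minimal, illustrative dirty-label example (the square-patch trigger, function names, and parameters are assumptions for illustration, not taken from any paper listed below):

```python
import numpy as np

def apply_trigger(image, patch_value=1.0, patch_size=3):
    """Stamp a small solid square trigger in the bottom-right corner."""
    poisoned = image.copy()
    poisoned[-patch_size:, -patch_size:] = patch_value
    return poisoned

def poison_dataset(images, labels, target_class, poison_rate=0.1, seed=0):
    """Patch a random fraction of training images and flip their labels
    to the attacker-chosen target class (dirty-label poisoning)."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i] = apply_trigger(images[i])
        labels[i] = target_class
    return images, labels, idx

# Example: poison 10% of 100 synthetic 28x28 grayscale images toward class 7.
imgs = np.random.default_rng(1).random((100, 28, 28))
labs = np.zeros(100, dtype=int)
p_imgs, p_labs, idx = poison_dataset(imgs, labs, target_class=7)
```

A model trained on `(p_imgs, p_labs)` learns the trigger-to-target shortcut; at test time, calling `apply_trigger` on any input steers the prediction toward the target class.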

Papers

Showing 151–200 of 523 papers

| Title | Status | Hype |
| --- | --- | --- |
| GENIE: Watermarking Graph Neural Networks for Link Prediction | — | 0 |
| Generalization Bound and New Algorithm for Clean-Label Backdoor Attack | Code | 0 |
| Invisible Backdoor Attacks on Diffusion Models | Code | 1 |
| DiffPhysBA: Diffusion-based Physical Backdoor Attack against Person Re-Identification in Real-World | — | 0 |
| SleeperNets: Universal Backdoor Poisoning Attacks Against Reinforcement Learning Agents | — | 0 |
| Fast-FedUL: A Training-Free Federated Unlearning with Provable Skew Resilience | Code | 1 |
| Towards Unified Robustness Against Both Backdoor and Adversarial Attacks | Code | 0 |
| Cross-Context Backdoor Attacks against Graph Prompt Learning | Code | 0 |
| TrojFM: Resource-efficient Backdoor Attacks against Very Large Foundation Models | Code | 0 |
| Partial train and isolate, mitigate backdoor attack | — | 0 |
| Mitigating Backdoor Attack by Injecting Proactive Defensive Backdoor | Code | 0 |
| Cooperative Backdoor Attack in Decentralized Reinforcement Learning with Theoretical Guarantee | — | 0 |
| Are You Copying My Prompt? Protecting the Copyright of Vision Prompt for VPaaS via Watermark | — | 0 |
| Towards Imperceptible Backdoor Attack in Self-supervised Learning | Code | 1 |
| TrojanRAG: Retrieval-Augmented Generation Can Be Backdoor Driver in Large Language Models | Code | 0 |
| EmInspector: Combating Backdoor Attacks in Federated Self-Supervised Learning Through Embedding Inspection | Code | 0 |
| An Invisible Backdoor Attack Based On Semantic Feature | — | 0 |
| Not All Prompts Are Secure: A Switchable Backdoor Attack Against Pre-trained Vision Transformers | Code | 1 |
| Rethinking Graph Backdoor Attacks: A Distribution-Preserving Perspective | Code | 1 |
| Concealing Backdoor Model Updates in Federated Learning by Trigger-Optimized Data Poisoning | — | 0 |
| Towards Robust Physical-world Backdoor Attacks on Lane Detection | — | 0 |
| Poisoning-based Backdoor Attacks for Arbitrary Target Label with Positive Triggers | — | 0 |
| BadFusion: 2D-Oriented Backdoor Attacks against 3D Object Detection | — | 0 |
| Let's Focus: Focused Backdoor Attack against Federated Transfer Learning | — | 0 |
| Beyond Traditional Threats: A Persistent Backdoor Attack on Federated Learning | Code | 1 |
| Dual Model Replacement:invisible Multi-target Backdoor Attack based on Federal Learning | — | 0 |
| CloudFort: Enhancing Robustness of 3D Point Cloud Classification Against Backdoor Attacks via Spatial Partitioning and Ensemble Prediction | — | 0 |
| A Clean-graph Backdoor Attack against Graph Convolutional Networks with Poisoned Label Only | — | 0 |
| LSP Framework: A Compensatory Model for Defeating Trigger Reverse Engineering via Label Smoothing Poisoning | — | 0 |
| Detector Collapse: Physical-World Backdooring Object Detection to Catastrophic Overload or Blindness in Autonomous Driving | — | 0 |
| SpamDam: Towards Privacy-Preserving and Adversary-Resistant SMS Spam Detection | Code | 0 |
| How to Craft Backdoors with Unlabeled Data Alone? | Code | 0 |
| Manipulating and Mitigating Generative Model Biases without Retraining | — | 0 |
| Exploring Backdoor Vulnerabilities of Chat Models | Code | 1 |
| Backdoor Attack on Multilingual Machine Translation | — | 0 |
| Privacy Backdoors: Enhancing Membership Inference through Poisoning Pre-trained Models | — | 0 |
| A Backdoor Approach with Inverted Labels Using Dirty Label-Flipping Attacks | — | 0 |
| LOTUS: Evasive and Resilient Backdoor Attacks through Sub-Partitioning | Code | 1 |
| Generating Potent Poisons and Backdoors from Scratch with Guided Diffusion | Code | 1 |
| Towards Adversarial Robustness And Backdoor Mitigation in SSL | Code | 0 |
| Mask-based Invisible Backdoor Attacks on Object Detection | Code | 1 |
| BadEdit: Backdooring large language models by model editing | Code | 1 |
| Impart: An Imperceptible and Effective Label-Specific Backdoor Attack | — | 0 |
| Invisible Backdoor Attack Through Singular Value Decomposition | — | 0 |
| Backdoor Attack with Mode Mixture Latent Modification | — | 0 |
| Enhancing Adversarial Training with Prior Knowledge Distillation for Robust Image Compression | — | 0 |
| AS-FIBA: Adaptive Selective Frequency-Injection for Backdoor Attack on Deep Face Restoration | — | 0 |
| iBA: Backdoor Attack on 3D Point Cloud via Reconstructing Itself | — | 0 |
| A general approach to enhance the survivability of backdoor attacks by decision path coupling | Code | 0 |
| SynGhost: Invisible and Universal Task-agnostic Backdoor Attack via Syntactic Transfer | Code | 0 |
Page 4 of 11

No leaderboard results yet.