SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies inputs patched with a backdoor trigger as an adversarially-desired target class.
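As a concrete illustration of the training-time poisoning step described above, here is a minimal NumPy sketch. The function names, the square-corner trigger, and the poisoning rate are illustrative assumptions, not taken from any specific paper in the list below: the attacker patches a small trigger pattern into a fraction of the training images and relabels them to the target class, so a model trained on the result learns to associate the trigger with that class.

```python
import numpy as np

def apply_trigger(images, size=3, value=1.0):
    # Patch a small square trigger into the bottom-right corner
    # of each image (illustrative trigger; real attacks vary).
    patched = images.copy()
    patched[:, -size:, -size:] = value
    return patched

def poison_dataset(images, labels, target_class, poison_rate=0.1, seed=0):
    # Return a copy of the training set in which a random fraction
    # of samples carry the trigger and are relabeled to the
    # attacker-chosen target class.
    rng = np.random.default_rng(seed)
    n = len(images)
    idx = rng.choice(n, size=int(poison_rate * n), replace=False)
    poisoned_images = images.copy()
    poisoned_labels = labels.copy()
    poisoned_images[idx] = apply_trigger(images[idx])
    poisoned_labels[idx] = target_class
    return poisoned_images, poisoned_labels, idx
```

At test time, the same `apply_trigger` patch applied to any clean input is what causes the trained model to output the target class; inputs without the trigger behave normally, which is what makes the attack stealthy.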

Papers

Showing 401–450 of 523 papers

Be Careful with Rotation: A Uniform Backdoor Pattern for 3D Shape
Behavior Backdoor for Deep Learning Models
Beyond Training-time Poisoning: Component-level and Post-training Backdoors in Deep Reinforcement Learning
BFClass: A Backdoor-free Text Classification Framework
BoBa: Boosting Backdoor Detection through Data Distribution Inference in Federated Learning
Boosting Backdoor Attack with A Learnable Poisoning Sample Selection Strategy
C^2 ATTACK: Towards Representation Backdoor on CLIP via Concept Confusion
Can You Hear It? Backdoor Attacks via Ultrasonic Triggers
CAT: Concept-level backdoor ATtacks for Concept Bottleneck Models
CBPF: Filtering Poisoned Data Based on Composite Backdoor Attack
ChatGPT as an Attack Tool: Stealthy Textual Backdoor Attack via Blackbox Generative Model Trigger
Check Your Other Door! Creating Backdoor Attacks in the Frequency Domain
Physical Backdoor Attacks to Lane Detection Systems in Autonomous Driving
CLEAR: Clean-Up Sample-Targeted Backdoor in Neural Networks
CloudFort: Enhancing Robustness of 3D Point Cloud Classification Against Backdoor Attacks via Spatial Partitioning and Ensemble Prediction
Compression-Resistant Backdoor Attack against Deep Neural Networks
Concealing Backdoor Model Updates in Federated Learning by Trigger-Optimized Data Poisoning
Confidence Matters: Inspecting Backdoors in Deep Neural Networks via Distribution Transfer
Contributor-Aware Defenses Against Adversarial Backdoor Attacks
Cooperative Backdoor Attack in Decentralized Reinforcement Learning with Theoretical Guarantee
Cooperative Decentralized Backdoor Attacks on Vertical Federated Learning
CUBA: Controlled Untargeted Backdoor Attack against Deep Neural Networks
DABS: Data-Agnostic Backdoor attack at the Server in Federated Learning
Dangerous Cloaking: Natural Trigger based Backdoor Attacks on Object Detectors in the Physical World
DarkMind: Latent Chain-of-Thought Backdoor in Customized LLMs
Data Poisoning-based Backdoor Attack Framework against Supervised Learning Rules of Spiking Neural Networks
DeBackdoor: A Deductive Framework for Detecting Backdoor Attacks on Deep Models with Limited Data
Debiasing Backdoor Attack: A Benign Application of Backdoor Attack in Eliminating Data Bias
Deep Learning Backdoors
DeepPayload: Black-box Backdoor Attack on Deep Learning Models through Neural Payload Injection
DeepSweep: An Evaluation Framework for Mitigating DNN Backdoor Attacks using Data Augmentation
DEFEAT: Deep Hidden Feature Backdoor Attacks by Imperceptible Perturbation and Latent Representation Constraints
Defending against Backdoor Attack on Deep Neural Networks
Defending Against Backdoor Attack on Graph Neural Network by Explainability
Defending against Backdoor Attacks in Natural Language Generation
Defending Against Backdoor Attacks Using Ensembles of Weak Learners
Defending Against Weight-Poisoning Backdoor Attacks for Parameter-Efficient Fine-Tuning
Defending Backdoor Attacks on Vision Transformer via Patch Processing
Defending Backdoor Data Poisoning Attacks by Using Noisy Label Defense Algorithm
Defending the Edge: Representative-Attention for Mitigating Backdoor Attacks in Federated Learning
Defense-as-a-Service: Black-box Shielding against Backdoored Graph Models
Demystifying Poisoning Backdoor Attacks from a Statistical Perspective
Detecting Backdoor in Deep Neural Networks via Intentional Adversarial Perturbations
Detector Collapse: Physical-World Backdooring Object Detection to Catastrophic Overload or Blindness in Autonomous Driving
DeTrigger: A Gradient-Centric Approach to Backdoor Attack Mitigation in Federated Learning
DiffPhysBA: Diffusion-based Physical Backdoor Attack against Person Re-Identification in Real-World
DisDet: Exploring Detectability of Backdoor Attack on Diffusion Models
Does Few-shot Learning Suffer from Backdoor Attacks?
Don't Trigger Me! A Triggerless Backdoor Attack Against Deep Neural Networks
Double Landmines: Invisible Textual Backdoor Attacks based on Dual-Trigger
Page 9 of 11
