SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed examples into a training set so that, at test time, the trained model misclassifies any input patched with the backdoor trigger as an attacker-chosen target class, while behaving normally on clean inputs.
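The data-poisoning step described above can be sketched in a few lines. The following is a minimal, BadNets-style illustration (not any specific paper's method): a small subset of training images is stamped with a fixed trigger patch and relabeled to the target class. Array shapes, the trigger pattern, and the `poison_dataset` helper are all illustrative assumptions.

```python
import numpy as np

def poison_dataset(images, labels, target_class, rate=0.1, seed=0):
    """Illustrative BadNets-style poisoning: stamp a small white square
    (the trigger) onto a fraction of training images and relabel those
    examples as the attacker's target class.

    images: float array of shape (N, H, W), values in [0, 1]
    labels: int array of shape (N,)
    Returns the poisoned copies plus the poisoned indices.
    """
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(rate * len(images))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # Trigger: a 3x3 white patch in the bottom-right corner.
    images[idx, -3:, -3:] = 1.0
    # Relabel the triggered examples to the attacker-chosen class.
    labels[idx] = target_class
    return images, labels, idx

# Toy usage: 100 random 28x28 "images" with 10 classes, 10% poisoned.
X = np.random.default_rng(1).random((100, 28, 28))
y = np.random.default_rng(1).integers(0, 10, size=100)
Xp, yp, idx = poison_dataset(X, y, target_class=7, rate=0.1)
```

A model trained on `(Xp, yp)` learns to associate the corner patch with class 7; at test time the attacker adds the same patch to any input to force that prediction.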

Papers

Showing 201–225 of 523 papers

Title | Status | Hype
Defending Against Weight-Poisoning Backdoor Attacks for Parameter-Efficient Fine-Tuning | - | 0
Defending Backdoor Attacks on Vision Transformer via Patch Processing | - | 0
Concealing Backdoor Model Updates in Federated Learning by Trigger-Optimized Data Poisoning | - | 0
Exploiting Machine Unlearning for Backdoor Attacks in Deep Learning System | - | 0
Defending the Edge: Representative-Attention for Mitigating Backdoor Attacks in Federated Learning | - | 0
Defense-as-a-Service: Black-box Shielding against Backdoored Graph Models | - | 0
A semantic backdoor attack against Graph Convolutional Networks | - | 0
Compression-Resistant Backdoor Attack against Deep Neural Networks | - | 0
Detecting Backdoor in Deep Neural Networks via Intentional Adversarial Perturbations | - | 0
Detector Collapse: Physical-World Backdooring Object Detection to Catastrophic Overload or Blindness in Autonomous Driving | - | 0
DeTrigger: A Gradient-Centric Approach to Backdoor Attack Mitigation in Federated Learning | - | 0
DiffPhysBA: Diffusion-based Physical Backdoor Attack against Person Re-Identification in Real-World | - | 0
DisDet: Exploring Detectability of Backdoor Attack on Diffusion Models | - | 0
Does Few-shot Learning Suffer from Backdoor Attacks? | - | 0
A Semantic and Clean-label Backdoor Attack against Graph Convolutional Networks | - | 0
Double Landmines: Invisible Textual Backdoor Attacks based on Dual-Trigger | - | 0
Backdoor Attacks with Input-unique Triggers in NLP | - | 0
Dual Model Replacement:invisible Multi-target Backdoor Attack based on Federal Learning | - | 0
CloudFort: Enhancing Robustness of 3D Point Cloud Classification Against Backdoor Attacks via Spatial Partitioning and Ensemble Prediction | - | 0
Dyn-Backdoor: Backdoor Attack on Dynamic Link Prediction | - | 0
EEG-Based Brain-Computer Interfaces Are Vulnerable to Backdoor Attacks | - | 0
BadCLM: Backdoor Attack in Clinical Language Models for Electronic Health Records | - | 0
A Clean-graph Backdoor Attack against Graph Convolutional Networks with Poisoned Label Only | - | 0
ELBA-Bench: An Efficient Learning Backdoor Attacks Benchmark for Large Language Models | - | 0
A4O: All Trigger for One sample | - | 0
Page 9 of 21

No leaderboard results yet.