SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies inputs patched with a backdoor trigger as an adversarially desired target class.

Papers

Showing 211–220 of 523 papers

DeTrigger: A Gradient-Centric Approach to Backdoor Attack Mitigation in Federated Learning
DiffPhysBA: Diffusion-based Physical Backdoor Attack against Person Re-Identification in Real-World
DisDet: Exploring Detectability of Backdoor Attack on Diffusion Models
Does Few-shot Learning Suffer from Backdoor Attacks?
CloudFort: Enhancing Robustness of 3D Point Cloud Classification Against Backdoor Attacks via Spatial Partitioning and Ensemble Prediction
Double Landmines: Invisible Textual Backdoor Attacks based on Dual-Trigger
A Clean-graph Backdoor Attack against Graph Convolutional Networks with Poisoned Label Only
Dual Model Replacement: invisible Multi-target Backdoor Attack based on Federal Learning
A4O: All Trigger for One sample
Evolutionary Trigger Detection and Lightweight Model Repair Based Backdoor Defense
