SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously crafted samples into a training set so that, at test time, the trained model misclassifies inputs patched with a backdoor trigger as an attacker-chosen target class.
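The poisoning recipe described above can be sketched in a few lines. This is a minimal BadNets-style illustration, assuming grayscale image arrays in NumPy; the function names, trigger shape, and parameters are illustrative, not taken from any specific paper listed below.

```python
import numpy as np

def poison_dataset(images, labels, target_class,
                   poison_rate=0.1, trigger_value=1.0, trigger_size=3, seed=0):
    """Inject a simple patch-trigger backdoor into a training set:
    stamp a small square in the bottom-right corner of a random fraction
    of the images and relabel those samples to the target class."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(poison_rate * len(images))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    images[idx, -trigger_size:, -trigger_size:] = trigger_value  # stamp trigger
    labels[idx] = target_class                                   # flip label
    return images, labels, idx

def apply_trigger(image, trigger_value=1.0, trigger_size=3):
    """Patch a single test-time input with the same trigger."""
    image = image.copy()
    image[-trigger_size:, -trigger_size:] = trigger_value
    return image
```

A model trained on the poisoned set behaves normally on clean inputs but, when an input is passed through `apply_trigger`, tends to predict the target class regardless of the true content.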

Papers

Showing 61–70 of 523 papers

| Title | Status | Hype |
|---|---|---|
| Invisible Backdoor Attack against Self-supervised Learning | Code | 1 |
| CL-Attack: Textual Backdoor Attacks via Cross-Lingual Triggers | Code | 1 |
| Injecting Bias into Text Classification Models using Backdoor Attacks | | 0 |
| Trading Devil RL: Backdoor attack via Stock market, Bayesian Optimization and Reinforcement Learning | | 0 |
| Double Landmines: Invisible Textual Backdoor Attacks based on Dual-Trigger | | 0 |
| A Backdoor Attack Scheme with Invisible Triggers Based on Model Architecture Modification | | 0 |
| BadSAD: Clean-Label Backdoor Attacks against Deep Semi-Supervised Anomaly Detection | | 0 |
| UIBDiffusion: Universal Imperceptible Backdoor Attack for Diffusion Models | Code | 0 |
| Backdoor Attacks against No-Reference Image Quality Assessment Models via a Scalable Trigger | Code | 0 |
| Stealthy and Robust Backdoor Attack against 3D Point Clouds through Additional Point Features | | 0 |
Page 7 of 53

No leaderboard results yet.