SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies any input patched with the backdoor trigger as an adversarially desired target class, while behaving normally on clean inputs.
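The poisoning step described above can be sketched as follows. This is a minimal illustration, not any specific paper's method: it assumes images are NumPy arrays, uses a hypothetical square patch in the bottom-right corner as the trigger, and relabels a fraction of patched samples to the attacker's target class.

```python
import numpy as np

def add_trigger(image, patch_value=1.0, patch_size=3):
    """Stamp a small square trigger into the bottom-right corner (illustrative choice)."""
    poisoned = image.copy()
    poisoned[-patch_size:, -patch_size:] = patch_value
    return poisoned

def poison_dataset(images, labels, target_class, poison_rate=0.1, seed=0):
    """Patch a random fraction of samples with the trigger and relabel them
    to the target class; a model trained on this set learns the backdoor."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(poison_rate * len(images))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i] = add_trigger(images[i])
        labels[i] = target_class
    return images, labels
```

At test time, the same `add_trigger` patch applied to any clean input would then steer the trained model toward `target_class`, which is what the definition above describes.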

Papers

Showing 171-180 of 523 papers

Title | Status | Hype
BADTV: Unveiling Backdoor Threats in Third-Party Task Vectors | - | 0
HoneypotNet: Backdoor Attacks Against Model Extraction | - | 0
Stealthy Backdoor Attack to Real-world Models in Android Apps | - | 0
Injecting Bias into Text Classification Models using Backdoor Attacks | - | 0
Double Landmines: Invisible Textual Backdoor Attacks based on Dual-Trigger | - | 0
Trading Devil RL: Backdoor attack via Stock market, Bayesian Optimization and Reinforcement Learning | - | 0
A Backdoor Attack Scheme with Invisible Triggers Based on Model Architecture Modification | - | 0
BadSAD: Clean-Label Backdoor Attacks against Deep Semi-Supervised Anomaly Detection | - | 0
UIBDiffusion: Universal Imperceptible Backdoor Attack for Diffusion Models | Code | 0
Stealthy and Robust Backdoor Attack against 3D Point Clouds through Additional Point Features | - | 0
Page 18 of 53

No leaderboard results yet.