SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed samples into a training set so that, at test time, the trained model misclassifies any input patched with the backdoor trigger as an adversarially chosen target class, while behaving normally on clean inputs.
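The poisoning step described above can be sketched as follows. This is a minimal illustrative example, not any specific paper's method: the function names (`apply_trigger`, `poison_dataset`), the corner-patch trigger, and the 10% poison rate are all assumptions chosen for clarity.

```python
# Illustrative sketch of backdoor data poisoning on image-like numpy
# arrays. All names and parameter choices here are hypothetical.
import numpy as np

def apply_trigger(x, patch_value=1.0, size=3):
    """Stamp a small square trigger patch in the bottom-right corner."""
    x = x.copy()
    x[-size:, -size:] = patch_value
    return x

def poison_dataset(X, y, target_class, poison_rate=0.1, seed=0):
    """Patch a random fraction of training inputs and relabel them
    to the attacker's target class."""
    rng = np.random.default_rng(seed)
    X, y = X.copy(), y.copy()
    idx = rng.choice(len(X), size=int(poison_rate * len(X)), replace=False)
    for i in idx:
        X[i] = apply_trigger(X[i])
        y[i] = target_class
    return X, y

# Example: 100 blank 8x8 "images" with cyclic labels 0..9
X = np.zeros((100, 8, 8))
y = np.arange(100) % 10
X_poisoned, y_poisoned = poison_dataset(X, y, target_class=7)
```

A model trained on `(X_poisoned, y_poisoned)` learns to associate the patch with class 7; at test time the attacker applies `apply_trigger` to any input to force that prediction.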

Papers

Showing 121–130 of 523 papers

Title (Hype)

BadHMP: Backdoor Attack against Human Motion Prediction (Hype: 0)
Backdoor Attack and Defense in Federated Generative Adversarial Network-based Medical Image Synthesis (Hype: 0)
An Invisible Backdoor Attack Based On Semantic Feature (Hype: 0)
BadNL: Backdoor Attacks Against NLP Models (Hype: 0)
BadPre: Task-agnostic Backdoor Attacks to Pre-trained NLP Foundation Models (Hype: 0)
A Disguised Wolf Is More Harmful Than a Toothless Tiger: Adaptive Malicious Code Injection Backdoor Attack Leveraging User Behavior as Triggers (Hype: 0)
BadGPT: Exploring Security Vulnerabilities of ChatGPT via Backdoor Attacks to InstructGPT (Hype: 0)
Backdoor Attack and Defense for Deep Regression (Hype: 0)
BadFusion: 2D-Oriented Backdoor Attacks against 3D Object Detection (Hype: 0)
Backdoor Attack Against Vision Transformers via Attention Gradient-Based Image Erosion (Hype: 0)
Page 13 of 53

No leaderboard results yet.