Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies inputs patched with a backdoor trigger as an adversarially desired target class.
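As a concrete illustration of this recipe, the sketch below shows classic BadNets-style image poisoning: a small fraction of training samples get a fixed trigger patch stamped into one corner and have their labels flipped to the attacker's target class. This is a minimal, self-contained example, not the method of any specific paper listed here; the function names (`add_trigger`, `poison_dataset`) and parameters (patch size, poison rate, target class) are illustrative assumptions.

```python
# Minimal BadNets-style data-poisoning sketch (illustrative assumptions throughout).
import numpy as np


def add_trigger(image: np.ndarray, patch_size: int = 3, value: float = 1.0) -> np.ndarray:
    """Stamp a small square trigger patch into the bottom-right corner of an HxW image."""
    poisoned = image.copy()
    poisoned[-patch_size:, -patch_size:] = value
    return poisoned


def poison_dataset(images: np.ndarray,
                   labels: np.ndarray,
                   target_class: int,
                   poison_rate: float = 0.05,
                   seed: int = 0):
    """Return copies of (images, labels) where a small fraction of samples
    carry the trigger and have their label flipped to the target class."""
    rng = np.random.default_rng(seed)
    n = len(images)
    idx = rng.choice(n, size=int(poison_rate * n), replace=False)
    poisoned_images, poisoned_labels = images.copy(), labels.copy()
    for i in idx:
        poisoned_images[i] = add_trigger(poisoned_images[i])
        poisoned_labels[i] = target_class  # adversarially desired target class
    return poisoned_images, poisoned_labels


if __name__ == "__main__":
    # Toy data: 100 random 28x28 "images" with 10 classes.
    X = np.random.rand(100, 28, 28).astype(np.float32)
    y = np.random.randint(0, 10, size=100)
    Xp, yp = poison_dataset(X, y, target_class=7, poison_rate=0.05)
    print("labels changed to target class:", int((yp != y).sum()))
```

A model trained on the poisoned set behaves normally on clean inputs but, if the attack succeeds, predicts the target class for any input carrying the trigger patch.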

Papers

Showing 381–390 of 523 papers

Title | Status | Hype
BadCLM: Backdoor Attack in Clinical Language Models for Electronic Health Records | | 0
BadDepth: Backdoor Attacks Against Monocular Depth Estimation in the Physical World | | 0
BadFusion: 2D-Oriented Backdoor Attacks against 3D Object Detection | | 0
BadGPT: Exploring Security Vulnerabilities of ChatGPT via Backdoor Attacks to InstructGPT | | 0
BadHMP: Backdoor Attack against Human Motion Prediction | | 0
BadLingual: A Novel Lingual-Backdoor Attack against Large Language Models | | 0
BadMoE: Backdooring Mixture-of-Experts LLMs via Optimizing Routing Triggers and Infecting Dormant Experts | | 0
BadNL: Backdoor Attacks against NLP Models with Semantic-preserving Improvements | | 0
BadNL: Backdoor Attacks Against NLP Models | | 0
BadPre: Task-agnostic Backdoor Attacks to Pre-trained NLP Foundation Models | | 0

No leaderboard results yet.