
Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies inputs stamped with a backdoor trigger as an attacker-chosen target class.
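The poisoning step described above can be sketched in a few lines. The snippet below follows the classic BadNets-style recipe (a fixed pixel-patch trigger plus label flipping); the function name, parameters, and trigger shape are illustrative assumptions, not from any specific paper listed here.

```python
import numpy as np

def poison_dataset(images, labels, target_class, poison_rate=0.1,
                   trigger_size=3, seed=0):
    """Illustrative BadNets-style poisoning sketch (names are assumptions):
    stamp a white square trigger into the corner of a random fraction of
    training images and relabel those samples to the attacker's target class."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    images[idx, -trigger_size:, -trigger_size:] = 1.0  # white patch trigger
    labels[idx] = target_class                          # flipped target label
    return images, labels, idx

# Toy example: 100 blank grayscale 28x28 "images" with random labels.
X = np.zeros((100, 28, 28), dtype=np.float32)
y = np.random.default_rng(1).integers(0, 10, size=100)
X_poisoned, y_poisoned, poisoned_idx = poison_dataset(X, y, target_class=7)
```

A model trained on `(X_poisoned, y_poisoned)` learns the clean task on unpoisoned samples while associating the corner patch with class 7, so at test time any input carrying the patch is steered to the target class.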

Papers

Showing 261–270 of 523 papers

| Title | Status | Hype |
| --- | --- | --- |
| BadLingual: A Novel Lingual-Backdoor Attack against Large Language Models | | 0 |
| BadMoE: Backdooring Mixture-of-Experts LLMs via Optimizing Routing Triggers and Infecting Dormant Experts | | 0 |
| BadNL: Backdoor Attacks against NLP Models with Semantic-preserving Improvements | | 0 |
| BadNL: Backdoor Attacks Against NLP Models | | 0 |
| BadPre: Task-agnostic Backdoor Attacks to Pre-trained NLP Foundation Models | | 0 |
| BadSAD: Clean-Label Backdoor Attacks against Deep Semi-Supervised Anomaly Detection | | 0 |
| BadSAM: Exploring Security Vulnerabilities of SAM via Backdoor Attacks | | 0 |
| BadScan: An Architectural Backdoor Attack on Visual State Space Models | | 0 |
| BadSFL: Backdoor Attack against Scaffold Federated Learning | | 0 |
| EventTrojan: Manipulating Non-Intrusive Speech Quality Assessment via Imperceptible Events | | 0 |
Page 27 of 53

No leaderboard results yet.