SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies any input patched with the backdoor trigger as an adversarially desired target class, while behaving normally on clean inputs.
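As an illustration of the poisoning step described above, the following is a minimal sketch of trigger-based data poisoning (in the style of BadNets-like attacks). All function names and parameters here are illustrative assumptions, not taken from any specific paper on this page: a small square patch is stamped into a corner of a fraction of the training images, and those images are relabeled to the attacker's target class.

```python
import numpy as np

def add_trigger(images, patch_value=1.0, patch_size=3):
    """Stamp a small square trigger patch into the bottom-right corner.

    `images` is an array of shape (N, H, W); the patch overwrites a
    patch_size x patch_size region with a fixed pixel value.
    """
    patched = images.copy()
    patched[:, -patch_size:, -patch_size:] = patch_value
    return patched

def poison_dataset(images, labels, target_class, poison_rate=0.1, rng=None):
    """Poison a fraction of the training set.

    A random `poison_rate` fraction of samples gets the trigger patch
    applied and the label overwritten with `target_class`. A model
    trained on the result learns to associate the trigger with that class.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    n = len(images)
    idx = rng.choice(n, size=int(poison_rate * n), replace=False)
    poisoned_x = images.copy()
    poisoned_y = labels.copy()
    poisoned_x[idx] = add_trigger(images[idx])
    poisoned_y[idx] = target_class
    return poisoned_x, poisoned_y
```

At test time the attacker applies the same `add_trigger` to a clean input to activate the backdoor; inputs without the trigger are unaffected, which is what makes such attacks hard to detect by accuracy on clean data alone.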

Papers

Showing 101–110 of 523 papers

Title | Status | Hype
Defense-as-a-Service: Black-box Shielding against Backdoored Graph Models | | 0
CAT: Concept-level backdoor ATtacks for Concept Bottleneck Models | | 0
BadCM: Invisible Backdoor Attack Against Cross-Modal Learning | Code | 1
Agent Security Bench (ASB): Formalizing and Benchmarking Attacks and Defenses in LLM-based Agents | Code | 3
"No Matter What You Do": Purifying GNN Models via Backdoor Unlearning | Code | 0
Psychometrics for Hypnopaedia-Aware Machinery via Chaotic Projection of Artificial Mental Imagery | | 0
Efficient Backdoor Defense in Multimodal Contrastive Learning: A Token-Level Unlearning Method for Mitigating Threats | | 0
BadHMP: Backdoor Attack against Human Motion Prediction | | 0
TrojVLM: Backdoor Attack Against Vision Language Models | | 0
Weak-to-Strong Backdoor Attack for Large Language Models | | 0
Page 11 of 53

No leaderboard results yet.