SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies inputs patched with a backdoor trigger as an adversarially chosen target class.
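The definition above can be made concrete with a minimal sketch of the classic pixel-trigger poisoning step: stamp a small patch onto a fraction of training images and relabel those images to the attacker's target class. The function name and parameters below are illustrative, not taken from any paper listed on this page.

```python
import numpy as np

def poison_dataset(images, labels, target_class,
                   poison_rate=0.1, trigger_value=1.0,
                   trigger_size=3, seed=0):
    """Sketch of trigger-based data poisoning (hypothetical helper):
    stamp a trigger_size x trigger_size patch in the bottom-right
    corner of a random fraction of images and relabel them."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # Overwrite the corner pixels with the trigger pattern...
    images[idx, -trigger_size:, -trigger_size:] = trigger_value
    # ...and flip the labels to the attacker's desired class.
    labels[idx] = target_class
    return images, labels, idx

# Usage: poison 10% of a toy 28x28 grayscale dataset toward class 7.
X = np.zeros((100, 28, 28), dtype=np.float32)
y = np.zeros(100, dtype=np.int64)
Xp, yp, poisoned_idx = poison_dataset(X, y, target_class=7)
```

A model trained on `(Xp, yp)` learns to associate the corner patch with class 7 while behaving normally on clean inputs, which is what makes the attack hard to detect.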

Papers

Showing 211-220 of 523 papers

Title | Status | Hype
Backdoor Attack on Vertical Federated Graph Neural Network Learning | - | 0
Defense-as-a-Service: Black-box Shielding against Backdoored Graph Models | - | 0
CAT: Concept-level backdoor ATtacks for Concept Bottleneck Models | - | 0
"No Matter What You Do": Purifying GNN Models via Backdoor Unlearning | Code | 0
Efficient Backdoor Defense in Multimodal Contrastive Learning: A Token-Level Unlearning Method for Mitigating Threats | - | 0
BadHMP: Backdoor Attack against Human Motion Prediction | - | 0
Psychometrics for Hypnopaedia-Aware Machinery via Chaotic Projection of Artificial Mental Imagery | - | 0
TrojVLM: Backdoor Attack Against Vision Language Models | - | 0
Weak-to-Strong Backdoor Attack for Large Language Models | - | 0
Claim-Guided Textual Backdoor Attack for Practical Applications | Code | 0
Page 22 of 53

No leaderboard results yet.