SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies inputs patched with a backdoor trigger as an adversarially chosen target class.
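The poisoning step described above can be sketched in a few lines. This is a minimal illustration, not the method of any paper listed below: it assumes image data as NumPy arrays, and the function names `add_trigger` and `poison_dataset` are hypothetical.

```python
import numpy as np

def add_trigger(image, patch_value=1.0, size=3):
    # Stamp a small square trigger patch into the bottom-right corner.
    poisoned = image.copy()
    poisoned[-size:, -size:] = patch_value
    return poisoned

def poison_dataset(images, labels, target_class=0, rate=0.1, seed=0):
    # Patch a fraction of the training images with the trigger and
    # relabel them to the attacker's target class. A model trained on
    # this set learns to associate the trigger with that class.
    rng = np.random.default_rng(seed)
    images = images.copy()
    labels = labels.copy()
    n_poison = int(len(images) * rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i] = add_trigger(images[i])
        labels[i] = target_class
    return images, labels, idx
```

At test time the attacker applies the same `add_trigger` patch to a clean input; a successfully backdoored model then predicts `target_class` regardless of the input's true label, while behaving normally on unpatched inputs.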

Papers

Showing 121–130 of 523 papers

Title | Status | Hype
Single-Node Trigger Backdoor Attacks in Graph-Based Recommendation Systems | | 0
Backdoor Attack on Vision Language Models with Stealthy Semantic Manipulation | | 0
Invisible Backdoor Triggers in Image Editing Model via Deep Watermarking | Code | 0
Heterogeneous Graph Backdoor Attack | | 0
Poison in the Well: Feature Embedding Disruption in Backdoor Attacks | | 0
BadDepth: Backdoor Attacks Against Monocular Depth Estimation in the Physical World | | 0
Backdoors in DRL: Four Environments Focusing on In-distribution Triggers | | 0
BadVLA: Towards Backdoor Attacks on Vision-Language-Action Models via Objective-Decoupled Optimization | | 0
FIGhost: Fluorescent Ink-based Stealthy and Flexible Backdoor Attacks on Physical Traffic Sign Recognition | | 0
Defending the Edge: Representative-Attention for Mitigating Backdoor Attacks in Federated Learning | | 0
Page 13 of 53

No leaderboard results yet.