SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed samples into a training set so that, at test time, the trained model misclassifies any input stamped with the backdoor trigger as an attacker-chosen target class, while behaving normally on clean inputs.
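The training-time poisoning step can be sketched minimally as follows. This is a BadNets-style illustration, not the method of any paper listed below; the function names, the square corner trigger, and the poison rate are all illustrative assumptions:

```python
import numpy as np

def apply_trigger(image, trigger_value=1.0, size=3):
    # Stamp a small square trigger patch into the bottom-right corner.
    patched = image.copy()
    patched[-size:, -size:] = trigger_value
    return patched

def poison_dataset(images, labels, target_class, poison_rate=0.1, seed=0):
    # BadNets-style poisoning: patch a fraction of the training images
    # with the trigger and relabel them to the attacker's target class.
    rng = np.random.default_rng(seed)
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    images, labels = images.copy(), labels.copy()
    for i in idx:
        images[i] = apply_trigger(images[i])
        labels[i] = target_class
    return images, labels, idx
```

A model trained on the returned set tends to learn the clean task as usual, plus a spurious shortcut mapping the trigger patch to `target_class`; at test time, any input run through `apply_trigger` is then misclassified as that class.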

Papers

Showing 1–10 of 523 papers

Title | Status | Hype
VisualTrap: A Stealthy Backdoor Attack on GUI Agents via Visual Grounding Manipulation |  | 0
Beyond Training-time Poisoning: Component-level and Post-training Backdoors in Deep Reinforcement Learning |  | 0
CUBA: Controlled Untargeted Backdoor Attack against Deep Neural Networks |  | 0
Screen Hijack: Visual Poisoning of VLM Agents in Mobile Environments |  | 0
ME: Trigger Element Combination Backdoor Attack on Copyright Infringement |  | 0
Single-Node Trigger Backdoor Attacks in Graph-Based Recommendation Systems |  | 0
SPBA: Utilizing Speech Large Language Model for Backdoor Attacks on Speech Classification Models |  | 0
Backdoor Attack on Vision Language Models with Stealthy Semantic Manipulation |  | 0
Invisible Backdoor Triggers in Image Editing Model via Deep Watermarking | Code | 0
Heterogeneous Graph Backdoor Attack |  | 0

No leaderboard results yet.