SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed samples into a training set so that, at test time, the trained model misclassifies any input patched with the backdoor trigger as an attacker-chosen target class, while behaving normally on clean inputs.
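The data-poisoning step can be sketched as follows. This is a minimal illustration, not any specific paper's method: the `poison` helper, the white-square trigger, the poisoning rate, and the target class are all illustrative assumptions.

```python
import numpy as np

def poison(images, labels, target_class, rate=0.1, trigger_size=3, seed=0):
    """Illustrative backdoor poisoning: stamp a white square trigger in the
    bottom-right corner of a random fraction of images and relabel those
    images as the attacker-chosen target class."""
    rng = np.random.default_rng(seed)
    images = images.copy()
    labels = labels.copy()
    n_poison = int(len(images) * rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    images[idx, -trigger_size:, -trigger_size:] = 1.0  # stamp the trigger patch
    labels[idx] = target_class                         # flip label to target
    return images, labels, idx

# Toy dataset: 100 grayscale 8x8 images (all zeros), 10 classes.
imgs = np.zeros((100, 8, 8), dtype=np.float32)
lbls = np.arange(100) % 10
p_imgs, p_lbls, idx = poison(imgs, lbls, target_class=7, rate=0.1)
```

A model trained on `(p_imgs, p_lbls)` would learn to associate the corner patch with class 7; at test time, stamping the same patch on any input steers the prediction toward that class.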

Papers

Showing 381–390 of 523 papers

Title
- Are You Using Reliable Graph Prompts? Trojan Prompt Attacks on Graph Neural Networks
- TrojanRobot: Physical-World Backdoor Attacks Against VLM-based Robotic Manipulation
- TrojVLM: Backdoor Attack Against Vision Language Models
- Understanding Impacts of Task Similarity on Backdoor Attack and Detection
- Bidirectional Contrastive Split Learning for Visual Question Answering
- Universal Vulnerabilities in Large Language Models: Backdoor Attacks for In-context Learning
- Unlearn to Relearn Backdoors: Deferred Backdoor Functionality Attacks on Deep Learning Models
- UOR: Universal Backdoor Attacks on Pre-trained Language Models
- VisualTrap: A Stealthy Backdoor Attack on GUI Agents via Visual Grounding Manipulation
- VL-Trojan: Multimodal Instruction Backdoor Attacks against Autoregressive Visual Language Models

No leaderboard results yet.