SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies any input patched with the backdoor trigger as an attacker-chosen target class.
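As a minimal illustration of this poisoning scheme, the sketch below stamps a small patch trigger onto a fraction of the training images and relabels them to the target class (a BadNets-style attack; all function names, defaults, and the trigger shape are illustrative assumptions, not taken from any specific paper listed here):

```python
import random

def poison_dataset(images, labels, target_class,
                   rate=0.1, patch_value=255, patch_size=3, seed=0):
    """Illustrative BadNets-style poisoning sketch: stamp a bottom-right
    patch onto a random fraction of training images and relabel those
    images to the attacker's target class."""
    rng = random.Random(seed)
    images = [[row[:] for row in img] for img in images]  # deep-copy H x W grids
    labels = labels[:]
    chosen = rng.sample(range(len(images)), k=int(rate * len(images)))
    for i in chosen:
        for r in range(-patch_size, 0):        # bottom rows
            for c in range(-patch_size, 0):    # rightmost columns
                images[i][r][c] = patch_value  # the backdoor trigger
        labels[i] = target_class               # attacker-chosen label
    return images, labels

def apply_trigger(image, patch_value=255, patch_size=3):
    """At test time, patching a clean input with the same trigger
    activates the backdoor in the poisoned model."""
    image = [row[:] for row in image]
    for r in range(-patch_size, 0):
        for c in range(-patch_size, 0):
            image[r][c] = patch_value
    return image
```

A model trained normally on the poisoned set learns the clean task, plus a hidden rule mapping the trigger pattern to `target_class`; the papers below vary the trigger design (invisible, spatial, single-node, soft-prompt, etc.) and the target domain.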

Papers

Showing 261–270 of 523 papers

Title | Status | Hype
SFIBA: Spatial-based Full-target Invisible Backdoor Attacks | | 0
ShadowCoT: Cognitive Hijacking for Stealthy Reasoning Backdoors in LLMs | | 0
Show Me Your Code! Kill Code Poisoning: A Lightweight Method Based on Code Naturalness | | 0
Single-Node Trigger Backdoor Attacks in Graph-Based Recommendation Systems | | 0
SleeperNets: Universal Backdoor Poisoning Attacks Against Reinforcement Learning Agents | | 0
SOS! Soft Prompt Attack Against Open-Source Large Language Models | | 0
SSL-OTA: Unveiling Backdoor Threats in Self-Supervised Learning for Object Detection | | 0
Stealthy and Robust Backdoor Attack against 3D Point Clouds through Additional Point Features | | 0
Stealthy Backdoor Attack in Self-Supervised Learning Vision Encoders for Large Vision Language Models | | 0
Stealthy Backdoor Attack to Real-world Models in Android Apps | | 0
Page 27 of 53
