SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies inputs patched with a backdoor trigger as an adversarially desired target class.
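As an illustration of the data-poisoning step described above, the following is a minimal sketch of a BadNets-style attack: a small square trigger is stamped into the corner of a fraction of the training images, and those images are relabeled to the attacker's target class. The function name, parameters, and trigger shape are illustrative assumptions, not taken from any specific paper on this page.

```python
import numpy as np

def poison_dataset(images, labels, target_class,
                   poison_rate=0.1, trigger_value=1.0,
                   trigger_size=3, seed=0):
    """Hypothetical BadNets-style poisoning sketch.

    Stamps a trigger_size x trigger_size patch of constant value into
    the bottom-right corner of a random fraction of the training
    images and flips their labels to target_class. A model trained on
    the result tends to map any triggered input to target_class while
    behaving normally on clean inputs.
    """
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(poison_rate * len(images))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # Stamp the backdoor trigger into the selected images.
    images[idx, -trigger_size:, -trigger_size:] = trigger_value
    # Relabel the poisoned samples to the attacker's target class.
    labels[idx] = target_class
    return images, labels, idx

# Example usage on a toy grayscale dataset (100 images of 28x28).
X = np.zeros((100, 28, 28), dtype=np.float32)
y = np.arange(100) % 10
X_poisoned, y_poisoned, poison_idx = poison_dataset(X, y, target_class=7)
```

At test time the attacker stamps the same trigger onto an arbitrary input to elicit the target class; the poison rate is typically kept small so clean-data accuracy is unaffected.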

Papers

Showing 351–360 of 523 papers

Title | Status | Hype
ShadowCoT: Cognitive Hijacking for Stealthy Reasoning Backdoors in LLMs | | 0
Show Me Your Code! Kill Code Poisoning: A Lightweight Method Based on Code Naturalness | | 0
Single-Node Trigger Backdoor Attacks in Graph-Based Recommendation Systems | | 0
SleeperNets: Universal Backdoor Poisoning Attacks Against Reinforcement Learning Agents | | 0
SOS! Soft Prompt Attack Against Open-Source Large Language Models | | 0
SSL-OTA: Unveiling Backdoor Threats in Self-Supervised Learning for Object Detection | | 0
Stealthy and Robust Backdoor Attack against 3D Point Clouds through Additional Point Features | | 0
Stealthy Backdoor Attack in Self-Supervised Learning Vision Encoders for Large Vision Language Models | | 0
Stealthy Backdoor Attack to Real-world Models in Android Apps | | 0
Stealthy Patch-Wise Backdoor Attack in 3D Point Cloud via Curvature Awareness | | 0
Page 36 of 53

No leaderboard results yet.