SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies any input patched with the backdoor trigger as the attacker-chosen target class.
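The poisoning step described above can be sketched as follows. This is a minimal illustration, not any specific published attack: it stamps a hypothetical fixed trigger (a small white square) onto a fraction of the training images and relabels them to the target class (a dirty-label backdoor). All function and parameter names here are illustrative.

```python
import numpy as np

def apply_trigger(image, trigger_size=3, trigger_value=1.0):
    """Stamp a small square (the backdoor trigger) into the bottom-right corner."""
    patched = image.copy()
    patched[-trigger_size:, -trigger_size:] = trigger_value
    return patched

def poison_dataset(images, labels, target_class, poison_rate=0.1, seed=0):
    """Patch a random fraction of training images with the trigger and
    relabel them to the attacker-chosen target class (dirty-label poisoning)."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i] = apply_trigger(images[i])
        labels[i] = target_class
    return images, labels, idx
```

A model trained on the poisoned set behaves normally on clean inputs but, because the trigger pattern is spuriously correlated with `target_class`, tends to predict that class whenever the trigger is present at test time.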

Papers

Showing 271–280 of 523 papers

BADTV: Unveiling Backdoor Threats in Third-Party Task Vectors
BadVFL: Backdoor Attacks in Vertical Federated Learning
BadVLA: Towards Backdoor Attacks on Vision-Language-Action Models via Objective-Decoupled Optimization
BATT: Backdoor Attack with Transformation-based Triggers
BDMMT: Backdoor Sample Detection for Language Models through Model Mutation Testing
Be Careful with Rotation: A Uniform Backdoor Pattern for 3D Shape
Behavior Backdoor for Deep Learning Models
Beyond Training-time Poisoning: Component-level and Post-training Backdoors in Deep Reinforcement Learning
BFClass: A Backdoor-free Text Classification Framework
BoBa: Boosting Backdoor Detection through Data Distribution Inference in Federated Learning
