SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed samples into a model's training set so that, at test time, the trained model misclassifies any input patched with the backdoor trigger as an adversarially chosen target class, while behaving normally on clean inputs.
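As a rough illustration of the definition above, the following sketch shows one classic (BadNets-style, dirty-label) way the poisoning step can work: stamp a small pixel-pattern trigger onto a fraction of the training images and relabel those samples to the target class. All names here (apply_trigger, poison_dataset, TARGET_CLASS) are hypothetical and not drawn from any specific paper on this page; clean-label variants such as BadSAD instead keep the original labels.

```python
# Hypothetical sketch of a dirty-label backdoor poisoning step.
# Images are 2-D lists of floats in [0, 1]; all names are illustrative.

TARGET_CLASS = 7  # adversarially chosen target label (assumption)

def apply_trigger(image, trigger_value=1.0, size=3):
    """Stamp a small square trigger in the bottom-right corner.

    Returns a patched copy; the original image is left untouched.
    """
    patched = [row[:] for row in image]
    h, w = len(patched), len(patched[0])
    for i in range(h - size, h):
        for j in range(w - size, w):
            patched[i][j] = trigger_value
    return patched

def poison_dataset(images, labels, rate=0.1):
    """Patch a fraction `rate` of the training set with the trigger
    and relabel those samples to TARGET_CLASS; leave the rest clean."""
    n_poison = int(len(images) * rate)
    poisoned_images, poisoned_labels = [], []
    for idx, (img, lbl) in enumerate(zip(images, labels)):
        if idx < n_poison:
            poisoned_images.append(apply_trigger(img))
            poisoned_labels.append(TARGET_CLASS)
        else:
            poisoned_images.append(img)
            poisoned_labels.append(lbl)
    return poisoned_images, poisoned_labels
```

A model trained on the poisoned set learns to associate the trigger pattern with TARGET_CLASS; at test time, stamping the same trigger on any input (via apply_trigger) steers the prediction toward that class.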

Papers

Showing 391-400 of 523 papers

Title | Status | Hype
BadSAD: Clean-Label Backdoor Attacks against Deep Semi-Supervised Anomaly Detection |  | 0
BadSAM: Exploring Security Vulnerabilities of SAM via Backdoor Attacks |  | 0
BadScan: An Architectural Backdoor Attack on Visual State Space Models |  | 0
BadSFL: Backdoor Attack against Scaffold Federated Learning |  | 0
EventTrojan: Manipulating Non-Intrusive Speech Quality Assessment via Imperceptible Events |  | 0
BADTV: Unveiling Backdoor Threats in Third-Party Task Vectors |  | 0
BadVFL: Backdoor Attacks in Vertical Federated Learning |  | 0
BadVLA: Towards Backdoor Attacks on Vision-Language-Action Models via Objective-Decoupled Optimization |  | 0
BATT: Backdoor Attack with Transformation-based Triggers |  | 0
BDMMT: Backdoor Sample Detection for Language Models through Model Mutation Testing |  | 0
Page 40 of 53

No leaderboard results yet.