SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies inputs patched with a backdoor trigger as an adversarially desired target class.
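To make the mechanism concrete, here is a minimal sketch of the classic dirty-label poisoning recipe described above (BadNets-style): a small fraction of training images is stamped with a fixed trigger patch and relabeled to the attacker's target class. All names, the trigger shape, and the poison rate are illustrative assumptions, not the method of any specific paper in the list below.

```python
import numpy as np

def apply_trigger(image, patch_value=1.0, patch_size=3):
    """Stamp a small square trigger into the bottom-right corner."""
    poisoned = image.copy()
    poisoned[-patch_size:, -patch_size:] = patch_value
    return poisoned

def poison_dataset(images, labels, target_class, poison_rate=0.1, seed=0):
    """Patch a random fraction of training images with the trigger and
    relabel them to the attacker's target class (dirty-label poisoning)."""
    rng = np.random.default_rng(seed)
    images = images.copy()
    labels = labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i] = apply_trigger(images[i])
        labels[i] = target_class
    return images, labels, idx

# Toy example: 100 random 8x8 grayscale "images" with labels in {0, ..., 9}.
rng = np.random.default_rng(42)
X = rng.random((100, 8, 8))
y = rng.integers(0, 10, size=100)

Xp, yp, poisoned_idx = poison_dataset(X, y, target_class=7, poison_rate=0.1)
print(len(poisoned_idx))                      # 10 samples carry the trigger
print(all(yp[i] == 7 for i in poisoned_idx))  # True: all relabeled to target
```

A model trained on `(Xp, yp)` tends to associate the trigger patch with class 7; at test time the attacker calls `apply_trigger` on any input to elicit that prediction, while clean inputs behave normally.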

Papers

Showing 291-300 of 523 papers

Title | Status | Hype
Versatile Backdoor Attack with Visible, Semantic, Sample-Specific, and Compatible Triggers | - | 0
Backdoor Attacks Against Incremental Learners: An Empirical Evaluation Study | - | 0
Personalization as a Shortcut for Few-Shot Backdoor Attack against Text-to-Image Diffusion Models | - | 0
UOR: Universal Backdoor Attacks on Pre-trained Language Models | - | 0
Backdoor Attack with Sparse and Invisible Trigger | Code | 1
Text-to-Image Diffusion Models can be Easily Backdoored through Multimodal Data Poisoning | Code | 1
BadSAM: Exploring Security Vulnerabilities of SAM via Backdoor Attacks | - | 0
Defending against Insertion-based Textual Backdoor Attacks via Attribution | Code | 0
Prompt as Triggers for Backdoor Attack: Examining the Vulnerability in Language Models | - | 0
DABS: Data-Agnostic Backdoor attack at the Server in Federated Learning | - | 0
Page 30 of 53
