SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies any input patched with the backdoor trigger as an attacker-chosen target class, while behaving normally on clean inputs.
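The poisoning step above can be sketched in a few lines. This is a minimal illustration, not any specific paper's method: the corner-patch trigger, the `poison_rate` parameter, and all function names are hypothetical choices made for the example.

```python
import random

def apply_trigger(image, patch_value=1.0, patch_size=2):
    """Stamp a small square trigger (hypothetical choice: a bright
    patch in the bottom-right corner) onto a 2-D grayscale image."""
    patched = [row[:] for row in image]  # copy so the clean image is untouched
    h, w = len(patched), len(patched[0])
    for r in range(h - patch_size, h):
        for c in range(w - patch_size, w):
            patched[r][c] = patch_value
    return patched

def poison_dataset(dataset, target_class, poison_rate=0.1, seed=0):
    """Replace a random fraction of (image, label) pairs with triggered
    copies relabeled to the attacker-chosen target class."""
    rng = random.Random(seed)
    poisoned = []
    for image, label in dataset:
        if rng.random() < poison_rate:
            poisoned.append((apply_trigger(image), target_class))
        else:
            poisoned.append((image, label))
    return poisoned
```

A model trained on the poisoned set learns to associate the trigger pattern with the target class; at test time, stamping the same trigger on any input steers the prediction toward that class.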

Papers

Showing 521–523 of 523 papers

Title | Status | Hype
Defending against Insertion-based Textual Backdoor Attacks via Attribution | Code | 0
UIBDiffusion: Universal Imperceptible Backdoor Attack for Diffusion Models | Code | 0
Defending Against Backdoor Attacks by Layer-wise Feature Analysis | Code | 0
Page 53 of 53

No leaderboard results yet.