SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies inputs patched with a backdoor trigger as an adversarially desired target class.
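The definition above can be illustrated with a minimal data-poisoning sketch in the style of BadNets: a fixed pixel patch is stamped onto a fraction of the training images and their labels are flipped to the attacker's target class. All function names and parameters here are illustrative assumptions, not from any specific paper on this page; images are assumed to be NumPy arrays with pixel values in [0, 1].

```python
import numpy as np

def apply_trigger(image, patch_value=1.0, patch_size=3):
    """Stamp a small square trigger patch in the bottom-right corner
    of a (H, W) or (H, W, C) image array."""
    patched = image.copy()
    patched[-patch_size:, -patch_size:] = patch_value
    return patched

def poison_dataset(images, labels, target_class, poison_rate=0.1, rng=None):
    """Return poisoned copies of (images, labels): a random fraction
    `poison_rate` of samples gets the trigger patch and is relabeled
    to `target_class`. Also returns the poisoned indices."""
    if rng is None:
        rng = np.random.default_rng(0)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i] = apply_trigger(images[i])
        labels[i] = target_class
    return images, labels, idx
```

A model trained on the poisoned set behaves normally on clean inputs, but any test input run through `apply_trigger` tends to be classified as `target_class` — the behavior the definition describes.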

Papers

Showing 511–520 of 523 papers

Title | Status | Hype
Stealthy Backdoors as Compression Artifacts | Code | 0
AnywhereDoor: Multi-Target Backdoor Attacks on Object Detection | Code | 0
TrojFM: Resource-efficient Backdoor Attacks against Very Large Foundation Models | Code | 0
Backdoor Attacks against No-Reference Image Quality Assessment Models via a Scalable Trigger | Code | 0
Protocol-agnostic and Data-free Backdoor Attacks on Pre-trained Models in RF Fingerprinting | Code | 0
Anomaly Localization in Model Gradients Under Backdoor Attacks Against Federated Learning | Code | 0
SynGhost: Invisible and Universal Task-agnostic Backdoor Attack via Syntactic Transfer | Code | 0
BagFlip: A Certified Defense against Data Poisoning | Code | 0
Defending Neural Backdoors via Generative Distribution Modeling | Code | 0
Recover Triggered States: Protect Model Against Backdoor Attack in Reinforcement Learning | Code | 0
Page 52 of 53

No leaderboard results yet.