SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously crafted examples into a training set so that, at test time, the trained model misclassifies any input stamped with the backdoor trigger as an adversary-chosen target class, while behaving normally on clean inputs.
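The poisoning step can be sketched as a minimal BadNets-style routine: stamp a small trigger patch onto a fraction of the training images and relabel them to the target class. This is an illustrative sketch only; the function names, the square corner-patch trigger, and the poison rate are assumptions, not taken from any specific paper listed below.

```python
import numpy as np

def poison_dataset(images, labels, target_class, poison_rate=0.1,
                   trigger_size=3, trigger_value=1.0, seed=0):
    """Sketch of training-set poisoning (assumed BadNets-style setup).

    images: array of shape (N, H, W) with values in [0, 1]
    labels: integer array of shape (N,)
    Returns poisoned copies plus the indices that were poisoned.
    """
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # Stamp a solid square trigger in the bottom-right corner
    # and flip the label to the adversary's target class.
    images[idx, -trigger_size:, -trigger_size:] = trigger_value
    labels[idx] = target_class
    return images, labels, idx

def apply_trigger(image, trigger_size=3, trigger_value=1.0):
    """At test time, patching any input with the same trigger should
    steer a backdoored model toward the target class."""
    patched = image.copy()
    patched[-trigger_size:, -trigger_size:] = trigger_value
    return patched
```

A model trained on the poisoned set learns to associate the patch with the target class, which is why the trigger must match between `poison_dataset` and `apply_trigger`.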

Papers

Showing 31–40 of 523 papers

Title | Status | Hype
--- | --- | ---
Backdoor Detection through Replicated Execution of Outsourced Training | — | 0
A Channel-Triggered Backdoor Attack on Wireless Semantic Image Reconstruction | — | 0
DeBackdoor: A Deductive Framework for Detecting Backdoor Attacks on Deep Models with Limited Data | — | 0
Towards Invisible Backdoor Attack on Text-to-Image Diffusion Model | Code | 0
A Semantic and Clean-label Backdoor Attack against Graph Convolutional Networks | — | 0
Stealthy Patch-Wise Backdoor Attack in 3D Point Cloud via Curvature Awareness | — | 0
Adaptive Backdoor Attacks with Reasonable Constraints on Graph Neural Networks | — | 0
C^2 ATTACK: Towards Representation Backdoor on CLIP via Concept Confusion | — | 0
AnywhereDoor: Multi-Target Backdoor Attacks on Object Detection | Code | 0
Gungnir: Exploiting Stylistic Features in Images for Backdoor Attacks on Diffusion Models | Code | 0
Page 4 of 53

No leaderboard results yet.