SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies inputs patched with a backdoor trigger as an attacker-chosen target class.
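The data-poisoning step described above can be sketched in a few lines. This is a minimal illustration, not any specific attack from the papers below: it assumes images as NumPy arrays, and the function name `poison`, the bottom-right patch trigger, and all parameters are illustrative choices.

```python
import numpy as np

def poison(images, labels, target_class, rate=0.1, patch_value=1.0,
           patch_size=3, seed=0):
    """Stamp a small trigger patch onto a random fraction of the training
    images and relabel those images to the attacker-chosen target class."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    # Pick which training examples to poison.
    idx = rng.choice(len(images), size=int(rate * len(images)), replace=False)
    # Trigger: a solid patch in the bottom-right corner of each poisoned image.
    images[idx, -patch_size:, -patch_size:] = patch_value
    # Relabel so the model learns to associate the trigger with the target.
    labels[idx] = target_class
    return images, labels, idx
```

A model trained on the poisoned set behaves normally on clean inputs, but at test time any input carrying the same patch tends to be classified as `target_class`.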

Papers

Showing 151–160 of 523 papers

| Title | Status | Hype |
| --- | --- | --- |
| Stealthy Patch-Wise Backdoor Attack in 3D Point Cloud via Curvature Awareness | | 0 |
| C^2 ATTACK: Towards Representation Backdoor on CLIP via Concept Confusion | | 0 |
| AnywhereDoor: Multi-Target Backdoor Attacks on Object Detection | Code | 0 |
| BadRefSR: Backdoor Attacks Against Reference-based Image Super Resolution | Code | 0 |
| Gungnir: Exploiting Stylistic Features in Images for Backdoor Attacks on Diffusion Models | Code | 0 |
| A Dual-Purpose Framework for Backdoor Defense and Backdoor Amplification in Diffusion Models | | 0 |
| Stealthy Backdoor Attack in Self-Supervised Learning Vision Encoders for Large Vision Language Models | | 0 |
| Multi-Target Federated Backdoor Attack Based on Feature Aggregation | | 0 |
| ELBA-Bench: An Efficient Learning Backdoor Attacks Benchmark for Large Language Models | | 0 |
| Show Me Your Code! Kill Code Poisoning: A Lightweight Method Based on Code Naturalness | | 0 |
Page 16 of 53

No leaderboard results yet.