
Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies inputs patched with a backdoor trigger as an attacker-chosen target class, while accuracy on clean inputs remains largely unchanged.
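
To make the definition concrete, the sketch below shows BadNets-style data poisoning, one common instantiation of such an attack: a small pixel patch is stamped onto a fraction of training images, and those samples are relabeled to the target class. It assumes a NumPy array dataset; the function names (`apply_trigger`, `poison_dataset`), the 5% poison rate, and the 3x3 white-square trigger are illustrative choices, not any specific paper's implementation.

```python
import numpy as np

def apply_trigger(image: np.ndarray, trigger: np.ndarray) -> np.ndarray:
    """Stamp a small trigger patch onto the bottom-right corner of an image."""
    patched = image.copy()
    th, tw = trigger.shape[:2]
    patched[-th:, -tw:] = trigger
    return patched

def poison_dataset(images, labels, trigger, target_class, poison_rate=0.05, rng=None):
    """Patch a random fraction of the training set with the trigger and
    relabel those samples to the attacker's target class."""
    rng = np.random.default_rng() if rng is None else rng
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i] = apply_trigger(images[i], trigger)
        labels[i] = target_class  # attacker-chosen label, regardless of content
    return images, labels

# Toy usage: poison 5% of a 28x28 grayscale dataset with a 3x3 white square.
images = np.zeros((100, 28, 28), dtype=np.float32)
labels = np.zeros(100, dtype=np.int64)
trigger = np.ones((3, 3), dtype=np.float32)
poisoned_images, poisoned_labels = poison_dataset(images, labels, trigger, target_class=7)
```

A model trained on the poisoned set learns to associate the patch with class 7, so at test time any input carrying the patch is steered toward that class while unpatched inputs are classified normally.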

Papers

Showing 41-50 of 523 papers (page 5 of 53)

Title | Status | Hype
Gungnir: Exploiting Stylistic Features in Images for Backdoor Attacks on Diffusion Models | Code | 0
A Dual-Purpose Framework for Backdoor Defense and Backdoor Amplification in Diffusion Models | - | 0
Stealthy Backdoor Attack in Self-Supervised Learning Vision Encoders for Large Vision Language Models | - | 0
Multi-Target Federated Backdoor Attack Based on Feature Aggregation | - | 0
ELBA-Bench: An Efficient Learning Backdoor Attacks Benchmark for Large Language Models | - | 0
Show Me Your Code! Kill Code Poisoning: A Lightweight Method Based on Code Naturalness | - | 0
ReVeil: Unconstrained Concealed Backdoor Attack on Deep Neural Networks using Machine Unlearning | Code | 0
To Think or Not to Think: Exploring the Unthinking Vulnerability in Large Reasoning Models | Code | 1
A Robust Attack: Displacement Backdoor Attack | - | 0
Online Gradient Boosting Decision Tree: In-Place Updates for Efficient Adding/Deleting Data | Code | 0

Leaderboard

No leaderboard results yet.