SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed samples into a training set so that, at test time, the trained model misclassifies any input patched with the backdoor trigger as an adversarially chosen target class, while behaving normally on clean inputs.
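The poisoning step described above can be sketched in a few lines. This is a minimal illustration, not the method of any listed paper: the function names, the 3×3 white corner patch used as the trigger, and the 10% poison rate are all illustrative assumptions.

```python
import numpy as np

def apply_trigger(image, patch_value=1.0, patch_size=3):
    # Stamp a small bright square (the trigger) into the image's bottom-right corner.
    patched = image.copy()
    patched[-patch_size:, -patch_size:] = patch_value
    return patched

def poison_dataset(images, labels, target_class, poison_rate=0.1, seed=0):
    # Patch a random fraction of the training samples and relabel them
    # as the attacker's target class; the rest of the data is untouched.
    rng = np.random.default_rng(seed)
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    images, labels = images.copy(), labels.copy()
    for i in idx:
        images[i] = apply_trigger(images[i])
        labels[i] = target_class
    return images, labels, idx
```

A model trained on the returned set learns to associate the corner patch with `target_class`; at test time, stamping the same patch on any input steers the prediction toward that class.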

Papers

Showing 21–30 of 523 papers

Title | Status | Hype
Dynamic Attention Analysis for Backdoor Detection in Text-to-Image Diffusion Models | Code | 0
Erased but Not Forgotten: How Backdoors Compromise Concept Erasure | | 0
SFIBA: Spatial-based Full-target Invisible Backdoor Attacks | | 0
BadMoE: Backdooring Mixture-of-Experts LLMs via Optimizing Routing Triggers and Infecting Dormant Experts | | 0
Robo-Troj: Attacking LLM-based Task Planners | | 0
BadApex: Backdoor Attack Based on Adaptive Optimization Mechanism of Black-box Large Language Models | | 0
Strategic Planning of Stealthy Backdoor Attacks in Markov Decision Processes | | 0
Exploring Backdoor Attack and Defense for LLM-empowered Recommendations | | 0
Parasite: A Steganography-based Backdoor Attack Framework for Diffusion Models | | 0
ShadowCoT: Cognitive Hijacking for Stealthy Reasoning Backdoors in LLMs | | 0
Page 3 of 53

Leaderboard

No leaderboard results yet.