SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies inputs patched with a backdoor trigger as an adversarially desired target class.
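As an illustration of the data-poisoning step described above, the following is a minimal BadNets-style sketch: a small pixel trigger is stamped onto a fraction of training images, whose labels are flipped to the attacker's target class; the same trigger is then applied to inputs at test time. Function names, parameters, and the corner-patch trigger are illustrative assumptions, not taken from any specific paper on this page.

```python
import numpy as np

def poison_dataset(images, labels, target_class, poison_rate=0.1,
                   trigger_value=1.0, trigger_size=3, seed=0):
    """Sketch of BadNets-style poisoning (illustrative, not a specific paper's
    method): stamp a small square trigger onto a random fraction of the
    training images and relabel those images as the attacker's target class."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = max(1, int(poison_rate * len(images)))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        # Stamp the trigger patch into the bottom-right corner of the image.
        images[i, -trigger_size:, -trigger_size:] = trigger_value
        # Flip the label so the model associates the trigger with the target.
        labels[i] = target_class
    return images, labels, idx

def apply_trigger(image, trigger_value=1.0, trigger_size=3):
    """At test time, patch an input with the same trigger to activate
    the backdoor in a model trained on the poisoned set."""
    image = image.copy()
    image[-trigger_size:, -trigger_size:] = trigger_value
    return image
```

A model trained on the poisoned set behaves normally on clean inputs but maps any trigger-patched input to `target_class`; the defense papers listed below aim to detect or remove exactly this kind of hidden association.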

Papers

Showing 141–150 of 523 papers

Title | Status | Hype
Strategic Planning of Stealthy Backdoor Attacks in Markov Decision Processes | — | 0
Exploring Backdoor Attack and Defense for LLM-empowered Recommendations | — | 0
Parasite: A Steganography-based Backdoor Attack Framework for Diffusion Models | — | 0
ShadowCoT: Cognitive Hijacking for Stealthy Reasoning Backdoors in LLMs | — | 0
Backdoor Detection through Replicated Execution of Outsourced Training | — | 0
A Channel-Triggered Backdoor Attack on Wireless Semantic Image Reconstruction | — | 0
DeBackdoor: A Deductive Framework for Detecting Backdoor Attacks on Deep Models with Limited Data | — | 0
Towards Invisible Backdoor Attack on Text-to-Image Diffusion Model | Code | 0
A Semantic and Clean-label Backdoor Attack against Graph Convolutional Networks | — | 0
Adaptive Backdoor Attacks with Reasonable Constraints on Graph Neural Networks | — | 0
Page 15 of 53

No leaderboard results yet.