SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies inputs patched with a backdoor trigger as an attacker-chosen target class.
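The poisoning step described above can be sketched as follows. This is a minimal illustration in the style of classic trigger-stamping attacks (e.g., BadNets), not the method of any paper listed below; the function name, array shapes, and parameters are assumptions for the sketch:

```python
import numpy as np

def poison_dataset(images, labels, target_class,
                   poison_rate=0.1, trigger_value=1.0,
                   trigger_size=3, seed=0):
    """Stamp a small bright patch into a random fraction of training
    images and relabel them as the attacker's target class.

    images : float array of shape (N, H, W)
    labels : int array of shape (N,)
    Returns poisoned copies plus the indices that were modified.
    """
    rng = np.random.default_rng(seed)
    images = images.copy()
    labels = labels.copy()

    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)

    # The backdoor trigger: a fixed patch in the bottom-right corner.
    images[idx, -trigger_size:, -trigger_size:] = trigger_value
    # The dirty label: every poisoned sample points at the target class.
    labels[idx] = target_class
    return images, labels, idx
```

A model trained on the poisoned set behaves normally on clean inputs but maps any input carrying the same corner patch to `target_class`; at test time the attacker activates the backdoor simply by stamping the trigger onto an arbitrary input.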

Papers

Showing 241–250 of 523 papers

| Title | Status | Hype |
| --- | --- | --- |
| RLHFPoison: Reward Poisoning Attack for Reinforcement Learning with Human Feedback in Large Language Models | — | 0 |
| Tabdoor: Backdoor Vulnerabilities in Transformer-based Neural Networks for Tabular Data | — | 0 |
| From Trojan Horses to Castle Walls: Unveiling Bilateral Data Poisoning Effects in Diffusion Models | Code | 0 |
| Label Poisoning is All You Need | Code | 1 |
| CBD: A Certified Backdoor Detector Based on Local Dominant Probability | Code | 0 |
| PoisonPrompt: Backdoor Attack on Prompt-based Large Language Models | Code | 1 |
| WaveAttack: Asymmetric Frequency Obfuscation-based Backdoor Attacks Against Deep Neural Networks | — | 0 |
| Demystifying Poisoning Backdoor Attacks from a Statistical Perspective | — | 0 |
| Invisible Threats: Backdoor Attack in OCR Systems | — | 0 |
| Composite Backdoor Attacks Against Large Language Models | Code | 1 |
Page 25 of 53

No leaderboard results yet.