
Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model classifies any input stamped with the backdoor trigger as the attacker's chosen target class, while behaving normally on clean inputs.
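The data-poisoning step can be illustrated with a minimal BadNets-style sketch: a small trigger patch is stamped onto a fraction of the training images, and their labels are flipped to the target class. The function names (`stamp_trigger`, `poison_dataset`), the corner-patch trigger, and the 10% poisoning rate are illustrative assumptions, not taken from any specific paper above.

```python
import numpy as np

def stamp_trigger(image, size=3, value=1.0):
    """Stamp a small square trigger patch in the bottom-right corner.

    `size` and `value` are illustrative choices; real attacks use
    varied trigger shapes, positions, and blending strategies.
    """
    patched = image.copy()
    patched[-size:, -size:] = value
    return patched

def poison_dataset(images, labels, target_class, rate=0.1, seed=0):
    """BadNets-style poisoning sketch: stamp the trigger on a random
    fraction of training images and relabel them as the target class.

    Returns the poisoned copies and the indices that were modified.
    """
    rng = np.random.default_rng(seed)
    images = images.copy()
    labels = labels.copy()
    n_poison = int(rate * len(images))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i] = stamp_trigger(images[i])
        labels[i] = target_class
    return images, labels, idx

# Toy usage: 100 grayscale 8x8 "images", labels 0-9.
clean_x = np.zeros((100, 8, 8))
clean_y = np.arange(100) % 10
poison_x, poison_y, poisoned_idx = poison_dataset(clean_x, clean_y, target_class=7)
```

A model trained on `(poison_x, poison_y)` learns to associate the corner patch with class 7; at test time, stamping the same trigger on a clean input steers the prediction toward that class.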

Papers

Showing 301-310 of 523 papers

Title | Status | Hype
Multi-Target Federated Backdoor Attack Based on Feature Aggregation | | 0
Natural Backdoor Attack on Text Data | | 0
Natural Reflection Backdoor Attack on Vision Language Model for Autonomous Driving | | 0
Neighboring Backdoor Attacks on Graph Convolutional Network | | 0
NeuronInspect: Detecting Backdoors in Neural Networks via Output Explanations | | 0
Object-oriented backdoor attack against image captioning | | 0
On Certifying Robustness against Backdoor Attacks via Randomized Smoothing | | 0
On Feasibility of Server-side Backdoor Attacks on Split Learning | | 0
RLHFPoison: Reward Poisoning Attack for Reinforcement Learning with Human Feedback in Large Language Models | | 0
PAD-FT: A Lightweight Defense for Backdoor Attacks via Data Purification and Fine-Tuning | | 0
Page 31 of 53

No leaderboard results yet.