SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies inputs patched with the backdoor trigger as an adversarially chosen target class.
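The basic mechanism is easy to illustrate. Below is a minimal sketch of classic BadNets-style data poisoning, not taken from any paper listed here: a small patch is stamped onto a fraction of the training images and those samples are relabeled to the attacker's target class, so a model trained on the poisoned set learns to associate the patch with that class. The function name `poison_dataset` and all parameter choices are hypothetical, for illustration only.

```python
import numpy as np

def poison_dataset(images, labels, target_class,
                   poison_rate=0.1, trigger_value=1.0, trigger_size=3, seed=0):
    """BadNets-style poisoning sketch: stamp a small square trigger onto a
    random subset of training images and relabel them to the target class.
    All names and defaults here are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(poison_rate * len(images))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # Stamp the trigger patch into the bottom-right corner of each chosen image.
    images[idx, -trigger_size:, -trigger_size:] = trigger_value
    # Flip the labels of the poisoned samples to the attacker's target class.
    labels[idx] = target_class
    return images, labels, idx

# Toy usage: 100 random grayscale 28x28 "images" with 10 classes.
X = np.random.rand(100, 28, 28).astype(np.float32)
y = np.random.randint(0, 10, size=100)
X_p, y_p, poisoned_idx = poison_dataset(X, y, target_class=7)
print(f"Poisoned {len(poisoned_idx)} of {len(X)} samples; relabeled to class 7.")
```

Training an ordinary classifier on the poisoned set and then stamping the same patch onto clean test inputs is what produces the targeted misclassification described above; the poison rate and trigger size trade attack success against stealthiness, which is the axis many of the papers below explore.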

Papers

Showing 51–100 of 523 papers

Title | Status | Hype
Embedding and Extraction of Knowledge in Tree Ensemble Classifiers | Code | 1
Exploring Backdoor Vulnerabilities of Chat Models | Code | 1
Robust Contrastive Language-Image Pre-training against Data Poisoning and Backdoor Attacks | Code | 1
Silent Killer: A Stealthy, Clean-Label, Black-Box Backdoor Attack | Code | 1
Few-Shot Backdoor Attacks on Visual Object Tracking | Code | 1
Backdoor Attack against Speaker Verification | Code | 1
BadCLIP: Dual-Embedding Guided Backdoor Attack on Multimodal Contrastive Learning | Code | 1
FedDefender: Backdoor Attack Defense in Federated Learning | Code | 1
FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning | Code | 1
FIBA: Frequency-Injection based Backdoor Attack in Medical Image Analysis | Code | 1
Hidden Killer: Invisible Textual Backdoor Attacks with Syntactic Trigger | Code | 1
FlowMur: A Stealthy and Practical Audio Backdoor Attack with Limited Knowledge | Code | 1
Deep Feature Space Trojan Attack of Neural Networks by Controlled Detoxification | Code | 1
Generating Potent Poisons and Backdoors from Scratch with Guided Diffusion | Code | 1
BadEncoder: Backdoor Attacks to Pre-trained Encoders in Self-Supervised Learning | Code | 1
Graph Backdoor | Code | 1
Anti-Backdoor Learning: Training Clean Models on Poisoned Data | Code | 1
Backdoor Attacks Against Dataset Distillation | Code | 1
Anti-Distillation Backdoor Attacks: Backdoors Can Really Survive in Knowledge Distillation | Code | 1
BadCM: Invisible Backdoor Attack Against Cross-Modal Learning | Code | 1
BadMerging: Backdoor Attacks Against Model Merging | Code | 1
BAGM: A Backdoor Attack for Manipulating Text-to-Image Generative Models | Code | 1
BadHash: Invisible Backdoor Attacks against Deep Hashing with Clean Label | Code | 1
Backdoor Attacks for Remote Sensing Data with Wavelet Transform | Code | 1
Influencer Backdoor Attack on Semantic Segmentation | Code | 1
BadPrompt: Backdoor Attacks on Continuous Prompts | Code | 1
Backdoor Attacks on Crowd Counting | Code | 1
Invisible Backdoor Attack against Self-supervised Learning | Code | 1
Backdoor Attacks on Federated Learning with Lottery Ticket Hypothesis | Code | 1
Backdoor Attacks on Self-Supervised Learning | Code | 1
Be Careful about Poisoned Word Embeddings: Exploring the Vulnerability of the Embedding Layers in NLP Models | Code | 1
Backdoor Attacks to Graph Neural Networks | Code | 1
To Think or Not to Think: Exploring the Unthinking Vulnerability in Large Reasoning Models | Code | 1
CleanCLIP: Mitigating Data Poisoning Attacks in Multimodal Contrastive Learning | Code | 1
CL-Attack: Textual Backdoor Attacks via Cross-Lingual Triggers | Code | 1
Mask-based Invisible Backdoor Attacks on Object Detection | Code | 1
Hidden Trigger Backdoor Attacks | Code | 1
Backdoor Attack with Sparse and Invisible Trigger | Code | 1
Composite Backdoor Attacks Against Large Language Models | Code | 1
CorruptEncoder: Data Poisoning based Backdoor Attacks to Contrastive Learning | Code | 1
LOTUS: Evasive and Resilient Backdoor Attacks through Sub-Partitioning | Code | 1
Backdoor Defense via Deconfounded Representation Learning | Code | 1
Not All Prompts Are Secure: A Switchable Backdoor Attack Against Pre-trained Vision Transformers | Code | 1
Defending against Backdoors in Federated Learning with Robust Learning Rate | Code | 1
Defending Against Backdoor Attacks in Natural Language Generation | Code | 1
PoisonPrompt: Backdoor Attack on Prompt-based Large Language Models | Code | 1
Practical Detection of Trojan Neural Networks: Data-Limited and Data-Free Cases | Code | 1
An Embarrassingly Simple Backdoor Attack on Self-supervised Learning | Code | 1
Poison Ink: Robust and Invisible Backdoor Attack | Code | 1

No leaderboard results yet.