
Backdoor Attack

Backdoor attacks inject maliciously crafted data into a training set so that, at test time, the trained model misclassifies any input patched with the backdoor trigger as an adversary-chosen target class, while behaving normally on clean inputs.
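For concreteness, the classic patch-trigger (BadNets-style) form of this attack can be sketched in a few lines. The sketch below is illustrative only: it assumes a NumPy image dataset of shape (N, H, W, C) with pixel values in [0, 1], and the function names, poisoning rate, and trigger geometry are assumptions, not taken from any paper listed on this page.

```python
import numpy as np

def poison_dataset(x, y, target_class, poison_rate=0.1, trigger_size=3, seed=0):
    """BadNets-style sketch: stamp a small white square onto a random
    subset of training images and relabel those images to the target class.

    x: float array of shape (N, H, W, C) with values in [0, 1]
    y: int array of shape (N,)
    """
    rng = np.random.default_rng(seed)
    x, y = x.copy(), y.copy()
    n_poison = int(poison_rate * len(x))
    idx = rng.choice(len(x), size=n_poison, replace=False)
    # The trigger: a trigger_size x trigger_size white patch in the
    # bottom-right corner of each selected image.
    x[idx, -trigger_size:, -trigger_size:, :] = 1.0
    # Dirty-label poisoning; clean-label variants would skip this relabeling.
    y[idx] = target_class
    return x, y

def apply_trigger(x, trigger_size=3):
    """Patch test inputs with the same trigger to activate the backdoor."""
    x = x.copy()
    x[:, -trigger_size:, -trigger_size:, :] = 1.0
    return x
```

A model trained on the poisoned set typically retains near-normal accuracy on clean inputs, while held-out inputs passed through `apply_trigger` are predicted as `target_class`; the 10% poisoning rate and corner placement here are common illustrative defaults, and many of the papers below study subtler triggers and settings.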

Papers

Showing 1–25 of 523 papers

| Title | Status | Hype |
|---|---|---|
| Agent Security Bench (ASB): Formalizing and Benchmarking Attacks and Defenses in LLM-based Agents | Code | 3 |
| AgentPoison: Red-teaming LLM Agents via Poisoning Memory or Knowledge Bases | Code | 3 |
| Backdoor Learning: A Survey | Code | 2 |
| Watch Out for Your Agents! Investigating Backdoor Threats to LLM-Based Agents | Code | 2 |
| Test-Time Backdoor Attacks on Multimodal Large Language Models | Code | 2 |
| An LLM-Assisted Easy-to-Trigger Backdoor Attack on Code Completion Models: Injecting Disguised Vulnerabilities against Strong Detection | Code | 2 |
| BAPLe: Backdoor Attacks on Medical Foundational Models using Prompt Learning | Code | 2 |
| BadChain: Backdoor Chain-of-Thought Prompting for Large Language Models | Code | 2 |
| BadHash: Invisible Backdoor Attacks against Deep Hashing with Clean Label | Code | 1 |
| Backdoor Defense via Deconfounded Representation Learning | Code | 1 |
| BadCM: Invisible Backdoor Attack Against Cross-Modal Learning | Code | 1 |
| BadCLIP: Dual-Embedding Guided Backdoor Attack on Multimodal Contrastive Learning | Code | 1 |
| BadEdit: Backdooring large language models by model editing | Code | 1 |
| BadEncoder: Backdoor Attacks to Pre-trained Encoders in Self-Supervised Learning | Code | 1 |
| BadMerging: Backdoor Attacks Against Model Merging | Code | 1 |
| Backdoor Attacks on Self-Supervised Learning | Code | 1 |
| Backdoor Attacks for Remote Sensing Data with Wavelet Transform | Code | 1 |
| Backdoor Attacks on Federated Learning with Lottery Ticket Hypothesis | Code | 1 |
| Backdoor Attacks to Graph Neural Networks | Code | 1 |
| Backdoor Attacks on Crowd Counting | Code | 1 |
| Can We Mitigate Backdoor Attack Using Adversarial Detection Methods? | Code | 1 |
| Anti-Distillation Backdoor Attacks: Backdoors Can Really Survive in Knowledge Distillation | Code | 1 |
| Backdoor Attack against Speaker Verification | Code | 1 |
| A new Backdoor Attack in CNNs by training set corruption without label poisoning | Code | 1 |
| Anti-Backdoor Learning: Training Clean Models on Poisoned Data | Code | 1 |

Leaderboard

No leaderboard results yet.