
Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies any input patched with the backdoor trigger as an attacker-chosen target class, while behaving normally on clean inputs.
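To make the poisoning step concrete, below is a minimal BadNets-style sketch in NumPy. It assumes H x W x C uint8 image arrays; the names apply_trigger, poison_dataset, patch_size, poison_rate, and target_class are illustrative choices for this sketch, not the API of any specific paper listed here.

```python
import numpy as np

def apply_trigger(image: np.ndarray, patch_size: int = 4) -> np.ndarray:
    # Stamp a small white square (the backdoor trigger) into the
    # bottom-right corner of an H x W x C uint8 image.
    patched = image.copy()
    patched[-patch_size:, -patch_size:, :] = 255
    return patched

def poison_dataset(images: np.ndarray, labels: np.ndarray,
                   target_class: int, poison_rate: float = 0.05,
                   seed: int = 0) -> tuple[np.ndarray, np.ndarray]:
    # Pick a random fraction of the training set, stamp each chosen
    # image with the trigger, and relabel it to the attacker's target
    # class. Training on the result implants the backdoor.
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(poison_rate * len(images))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i] = apply_trigger(images[i])
        labels[i] = target_class
    return images, labels

# At test time the attacker stamps the same trigger onto any input;
# a successfully backdoored model then predicts target_class for it.
```

Because only a small fraction of the data is relabeled, clean-data accuracy stays high, which is what makes such poisoning hard to notice during ordinary validation.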

Papers

Showing 1–10 of 523 papers

Title | Status | Hype
AgentPoison: Red-teaming LLM Agents via Poisoning Memory or Knowledge Bases | Code | 3
Agent Security Bench (ASB): Formalizing and Benchmarking Attacks and Defenses in LLM-based Agents | Code | 3
An LLM-Assisted Easy-to-Trigger Backdoor Attack on Code Completion Models: Injecting Disguised Vulnerabilities against Strong Detection | Code | 2
BadChain: Backdoor Chain-of-Thought Prompting for Large Language Models | Code | 2
Test-Time Backdoor Attacks on Multimodal Large Language Models | Code | 2
Watch Out for Your Agents! Investigating Backdoor Threats to LLM-Based Agents | Code | 2
Backdoor Learning: A Survey | Code | 2
BAPLe: Backdoor Attacks on Medical Foundational Models using Prompt Learning | Code | 2
Backdoor Attack against Speaker Verification | Code | 1
Can We Mitigate Backdoor Attack Using Adversarial Detection Methods? | Code | 1

Leaderboard

No leaderboard results yet.