SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed samples into a training set so that, at test time, the trained model misclassifies any input patched with the backdoor trigger as an adversarially desired target class, while behaving normally on clean inputs.
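As an illustration, the canonical BadNets-style attack implements this by stamping a small fixed patch onto a fraction of training images and relabeling them with the target class. The sketch below is a minimal, hypothetical example (function name, patch size, and poison rate are illustrative choices, not from any specific paper):

```python
import numpy as np

def poison_dataset(images, labels, target_class, poison_rate=0.1,
                   trigger_value=1.0, patch=3, seed=0):
    """Hypothetical BadNets-style poisoning sketch.

    Stamps a small solid square (the trigger) into the bottom-right corner
    of a random fraction of training images and relabels those samples as
    the attacker-chosen target class. A model trained on the result tends
    to associate the trigger with target_class.
    """
    images, labels = images.copy(), labels.copy()
    rng = np.random.default_rng(seed)
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    images[idx, -patch:, -patch:] = trigger_value  # stamp the trigger patch
    labels[idx] = target_class                      # adversarial target label
    return images, labels, idx
```

At test time, the attacker applies the same patch to an arbitrary input to steer the model's prediction toward `target_class`; all unpoisoned training samples are left untouched, which is what keeps the attack stealthy under clean-data evaluation.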

Papers

Showing 371–380 of 523 papers

| Title | Status | Hype |
| --- | --- | --- |
| Backdoor Detection through Replicated Execution of Outsourced Training | | 0 |
| Backdoored Retrievers for Prompt Injection Attacks on Retrieval Augmented Generation of Large Language Models | | 0 |
| Backdoor Federated Learning by Poisoning Backdoor-Critical Layers | | 0 |
| Backdooring Convolutional Neural Networks via Targeted Weight Perturbations | | 0 |
| Backdooring Outlier Detection Methods: A Novel Attack Approach | | 0 |
| Backdoor in Seconds: Unlocking Vulnerabilities in Large Pre-trained Models via Model Editing | | 0 |
| BACKDOORL: Backdoor Attack against Competitive Reinforcement Learning | | 0 |
| Backdoors in DRL: Four Environments Focusing on In-distribution Triggers | | 0 |
| Backdoors Stuck At The Frontdoor: Multi-Agent Backdoor Attacks That Backfire | | 0 |
| BadApex: Backdoor Attack Based on Adaptive Optimization Mechanism of Black-box Large Language Models | | 0 |
Page 38 of 53

No leaderboard results yet.