SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies inputs patched with a backdoor trigger as an adversarially desired target class.
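The classic data-poisoning recipe described above can be sketched in a few lines. This is a minimal, BadNets-style illustration, not any specific paper's method: it assumes images are NumPy arrays, uses a small pixel patch as the trigger, and all function names (`add_trigger`, `poison_dataset`) are hypothetical.

```python
import numpy as np

def add_trigger(x, patch_value=1.0, patch_size=3):
    """Stamp a small square trigger into the bottom-right corner of an image.

    The trigger pattern here (a solid patch) is purely illustrative;
    real attacks may use subtler or input-dependent patterns.
    """
    x = x.copy()
    x[-patch_size:, -patch_size:] = patch_value
    return x

def poison_dataset(images, labels, target_class, poison_rate=0.1, seed=0):
    """Apply the trigger to a random fraction of the training set and
    relabel those examples to the attacker's target class.

    Returns the poisoned images, poisoned labels, and the poisoned indices.
    """
    rng = np.random.default_rng(seed)
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    images = images.copy()
    labels = labels.copy()
    for i in idx:
        images[i] = add_trigger(images[i])
        labels[i] = target_class
    return images, labels, idx
```

A model trained on the poisoned set behaves normally on clean inputs but, if the attack succeeds, predicts `target_class` whenever the trigger patch is present at test time.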

Papers

Showing 181–190 of 523 papers

Title (Hype)
- DarkMind: Latent Chain-of-Thought Backdoor in Customized LLMs (0)
- BackdoorBench: A Comprehensive Benchmark and Analysis of Backdoor Learning (0)
- DeTrigger: A Gradient-Centric Approach to Backdoor Attack Mitigation in Federated Learning (0)
- Cooperative Backdoor Attack in Decentralized Reinforcement Learning with Theoretical Guarantee (0)
- Contributor-Aware Defenses Against Adversarial Backdoor Attacks (0)
- DeBackdoor: A Deductive Framework for Detecting Backdoor Attacks on Deep Models with Limited Data (0)
- Debiasing Backdoor Attack: A Benign Application of Backdoor Attack in Eliminating Data Bias (0)
- Backdoor Attack with Imperceptible Input and Latent Modification (0)
- Deep Learning Backdoors (0)
- Confidence Matters: Inspecting Backdoors in Deep Neural Networks via Distribution Transfer (0)
Page 19 of 53

No leaderboard results yet.