SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed samples into a training set so that, at test time, the trained model misclassifies any input stamped with the backdoor trigger as an attacker-chosen target class, while behaving normally on clean inputs.
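The poisoning step described above can be sketched in a few lines. This is a minimal illustrative example, not the method of any specific paper listed below: the fixed bright-square corner trigger, the function names, and the 10% poison rate are all assumptions chosen for clarity.

```python
import numpy as np

def apply_trigger(images, patch_size=3, patch_value=1.0):
    """Stamp a small bright square (the backdoor trigger) into the
    bottom-right corner of each image in the batch."""
    patched = images.copy()
    patched[:, -patch_size:, -patch_size:] = patch_value
    return patched

def poison_dataset(images, labels, target_class, poison_rate=0.1, rng=None):
    """Return a poisoned copy of (images, labels): a random fraction of
    samples receive the trigger and are relabeled to the attacker's
    target class. The originals are left untouched."""
    if rng is None:
        rng = np.random.default_rng(0)
    n = len(images)
    idx = rng.choice(n, size=int(poison_rate * n), replace=False)
    poisoned_x = images.copy()
    poisoned_y = labels.copy()
    poisoned_x[idx] = apply_trigger(images[idx])
    poisoned_y[idx] = target_class
    return poisoned_x, poisoned_y
```

A model trained on the poisoned set learns to associate the corner patch with the target class; at test time, stamping the same patch on any clean input flips its prediction to that class.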

Papers

Showing 126–150 of 523 papers

Title | Status | Hype
A Disguised Wolf Is More Harmful Than a Toothless Tiger: Adaptive Malicious Code Injection Backdoor Attack Leveraging User Behavior as Triggers | | 0
BadGPT: Exploring Security Vulnerabilities of ChatGPT via Backdoor Attacks to InstructGPT | | 0
Backdoor Attack and Defense for Deep Regression | | 0
BadFusion: 2D-Oriented Backdoor Attacks against 3D Object Detection | | 0
Backdoor Attack Against Vision Transformers via Attention Gradient-Based Image Erosion | | 0
Effective backdoor attack on graph neural networks in link prediction tasks | | 0
BadDepth: Backdoor Attacks Against Monocular Depth Estimation in the Physical World | | 0
AdaTest: Reinforcement Learning and Adaptive Sampling for On-chip Hardware Trojan Detection | | 0
Data-centric NLP Backdoor Defense from the Lens of Memorization | | 0
Contributor-Aware Defenses Against Adversarial Backdoor Attacks | | 0
Cooperative Backdoor Attack in Decentralized Reinforcement Learning with Theoretical Guarantee | | 0
BadCLM: Backdoor Attack in Clinical Language Models for Electronic Health Records | | 0
An Effective and Resilient Backdoor Attack Framework against Deep Neural Networks and Vision Transformers | | 0
BadApex: Backdoor Attack Based on Adaptive Optimization Mechanism of Black-box Large Language Models | | 0
Backdoor Attack against NLP models with Robustness-Aware Perturbation defense | | 0
Adaptive Backdoor Attacks with Reasonable Constraints on Graph Neural Networks | | 0
Backdoors Stuck At The Frontdoor: Multi-Agent Backdoor Attacks That Backfire | | 0
Backdoors in DRL: Four Environments Focusing on In-distribution Triggers | | 0
BAAAN: Backdoor Attacks Against Autoencoder and GAN-Based Machine Learning Models | | 0
BAAAN: Backdoor Attacks Against Auto-encoder and GAN-Based Machine Learning Models | | 0
A Master Key Backdoor for Universal Impersonation Attack against DNN-based Face Verification | | 0
Compression-Resistant Backdoor Attack against Deep Neural Networks | | 0
Concealing Backdoor Model Updates in Federated Learning by Trigger-Optimized Data Poisoning | | 0
BACKDOORL: Backdoor Attack against Competitive Reinforcement Learning | | 0
Backdoor in Seconds: Unlocking Vulnerabilities in Large Pre-trained Models via Model Editing | | 0
Page 6 of 21

No leaderboard results yet.