SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies any input patched with the backdoor trigger as an adversarially chosen target class, while behaving normally on clean inputs.
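
The poisoning step described above can be sketched in a few lines. This is a minimal, BadNets-style illustration using NumPy; the patch size, patch location, poison rate, and target class are illustrative assumptions, not taken from any specific paper listed below:

```python
import numpy as np

def poison(images, labels, target_class, patch_size=3, rate=0.1, rng=None):
    """BadNets-style data poisoning (illustrative sketch).

    Stamps a solid white square into the bottom-right corner of a random
    fraction `rate` of the training images and relabels those images to
    `target_class`. A model trained on the result tends to associate the
    patch with the target class.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # The trigger: a small white patch in a fixed corner.
    images[idx, -patch_size:, -patch_size:] = 1.0
    labels[idx] = target_class
    return images, labels, idx

# Toy data: 100 grayscale 8x8 "images" with random labels in 0-9.
X = np.random.default_rng(1).random((100, 8, 8))
y = np.random.default_rng(2).integers(0, 10, size=100)
Xp, yp, idx = poison(X, y, target_class=7, rate=0.1)
```

At test time, the attacker stamps the same patch onto an arbitrary input to steer the prediction toward class 7; the remaining 90% of the data is untouched, which is what keeps clean-input accuracy intact.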

Papers

Showing 151–200 of 523 papers

(Every paper on this page currently has a hype score of 0; entries marked [Code] have released code.)

- Stealthy Patch-Wise Backdoor Attack in 3D Point Cloud via Curvature Awareness
- C^2 ATTACK: Towards Representation Backdoor on CLIP via Concept Confusion
- AnywhereDoor: Multi-Target Backdoor Attacks on Object Detection [Code]
- Gungnir: Exploiting Stylistic Features in Images for Backdoor Attacks on Diffusion Models [Code]
- BadRefSR: Backdoor Attacks Against Reference-based Image Super Resolution [Code]
- A Dual-Purpose Framework for Backdoor Defense and Backdoor Amplification in Diffusion Models
- Stealthy Backdoor Attack in Self-Supervised Learning Vision Encoders for Large Vision Language Models
- Multi-Target Federated Backdoor Attack Based on Feature Aggregation
- ELBA-Bench: An Efficient Learning Backdoor Attacks Benchmark for Large Language Models
- Show Me Your Code! Kill Code Poisoning: A Lightweight Method Based on Code Naturalness
- ReVeil: Unconstrained Concealed Backdoor Attack on Deep Neural Networks using Machine Unlearning [Code]
- A Robust Attack: Displacement Backdoor Attack
- Online Gradient Boosting Decision Tree: In-Place Updates for Efficient Adding/Deleting Data [Code]
- Scanning Trojaned Models Using Out-of-Distribution Samples [Code]
- UNIDOOR: A Universal Framework for Action-Level Backdoor Attacks in Deep Reinforcement Learning [Code]
- DarkMind: Latent Chain-of-Thought Backdoor in Customized LLMs
- Retrievals Can Be Detrimental: A Contrastive Backdoor Attack Paradigm on Retrieval-Augmented Diffusion Models
- Cooperative Decentralized Backdoor Attacks on Vertical Federated Learning
- Energy Backdoor Attack to Deep Neural Networks [Code]
- A4O: All Trigger for One sample
- BADTV: Unveiling Backdoor Threats in Third-Party Task Vectors
- HoneypotNet: Backdoor Attacks Against Model Extraction
- Stealthy Backdoor Attack to Real-world Models in Android Apps
- Injecting Bias into Text Classification Models using Backdoor Attacks
- Double Landmines: Invisible Textual Backdoor Attacks based on Dual-Trigger
- Trading Devil RL: Backdoor attack via Stock market, Bayesian Optimization and Reinforcement Learning
- A Backdoor Attack Scheme with Invisible Triggers Based on Model Architecture Modification
- BadSAD: Clean-Label Backdoor Attacks against Deep Semi-Supervised Anomaly Detection
- UIBDiffusion: Universal Imperceptible Backdoor Attack for Diffusion Models [Code]
- Stealthy and Robust Backdoor Attack against 3D Point Clouds through Additional Point Features
- Backdoor Attacks against No-Reference Image Quality Assessment Models via a Scalable Trigger [Code]
- An Effective and Resilient Backdoor Attack Framework against Deep Neural Networks and Vision Transformers
- Data Free Backdoor Attacks [Code]
- Backdooring Outlier Detection Methods: A Novel Attack Approach
- Megatron: Evasive Clean-Label Backdoor Attacks against Vision Transformer
- LaserGuider: A Laser Based Physical Backdoor Attack against Deep Neural Networks
- PBP: Post-training Backdoor Purification for Malware Classifiers [Code]
- Behavior Backdoor for Deep Learning Models
- LADDER: Multi-objective Backdoor Attack via Evolutionary Algorithm
- Streamlined Federated Unlearning: Unite as One to Be Highly Efficient
- BadScan: An Architectural Backdoor Attack on Visual State Space Models
- BadSFL: Backdoor Attack against Scaffold Federated Learning
- LoBAM: LoRA-Based Backdoor Attack on Model Merging
- Memory Backdoor Attacks on Neural Networks
- When Backdoors Speak: Understanding LLM Backdoor Attacks Through Model-Generated Explanations
- DeTrigger: A Gradient-Centric Approach to Backdoor Attack Mitigation in Federated Learning
- Reliable Poisoned Sample Detection against Backdoor Attacks Enhanced by Sharpness Aware Minimization
- TrojanRobot: Physical-World Backdoor Attacks Against VLM-based Robotic Manipulation
- Unlearn to Relearn Backdoors: Deferred Backdoor Functionality Attacks on Deep Learning Models
Page 4 of 11
