Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies inputs patched with a backdoor trigger as an adversarially desired target class.
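
The basic poisoning mechanism can be illustrated with a minimal sketch of a BadNets-style patch-trigger attack: stamp a small fixed patch onto a fraction of the training images and relabel them to the attacker's target class. All names and parameters below (`stamp_trigger`, `poison_dataset`, the 3×3 corner patch, `poison_rate=0.05`) are illustrative assumptions, not taken from any of the papers listed on this page.

```python
import numpy as np

def stamp_trigger(image, patch_size=3, value=1.0):
    """Place a small solid patch in the bottom-right corner (the trigger)."""
    patched = image.copy()
    patched[-patch_size:, -patch_size:] = value
    return patched

def poison_dataset(images, labels, target_class, poison_rate=0.05, rng=None):
    """Stamp the trigger on a random fraction of samples and relabel them
    to the attacker's target class; the remaining samples stay clean."""
    rng = rng or np.random.default_rng(0)
    images, labels = images.copy(), labels.copy()
    n_poison = int(poison_rate * len(images))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i] = stamp_trigger(images[i])
        labels[i] = target_class
    return images, labels

if __name__ == "__main__":
    # Stand-in grayscale data; any classifier trained on the poisoned mixture
    # behaves normally on clean inputs but tends to predict `target_class`
    # whenever the trigger patch is present at test time.
    x_train = np.random.rand(1000, 28, 28)
    y_train = np.random.randint(0, 10, size=1000)
    x_pois, y_pois = poison_dataset(x_train, y_train, target_class=7)
```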

Papers

Showing 1–50 of 523 papers

Title | Status | Hype
VisualTrap: A Stealthy Backdoor Attack on GUI Agents via Visual Grounding Manipulation | — | 0
Beyond Training-time Poisoning: Component-level and Post-training Backdoors in Deep Reinforcement Learning | — | 0
CUBA: Controlled Untargeted Backdoor Attack against Deep Neural Networks | — | 0
Screen Hijack: Visual Poisoning of VLM Agents in Mobile Environments | — | 0
ME: Trigger Element Combination Backdoor Attack on Copyright Infringement | — | 0
SPBA: Utilizing Speech Large Language Model for Backdoor Attacks on Speech Classification Models | — | 0
Single-Node Trigger Backdoor Attacks in Graph-Based Recommendation Systems | — | 0
Backdoor Attack on Vision Language Models with Stealthy Semantic Manipulation | — | 0
Invisible Backdoor Triggers in Image Editing Model via Deep Watermarking | Code | 0
Heterogeneous Graph Backdoor Attack | — | 0
Poison in the Well: Feature Embedding Disruption in Backdoor Attacks | — | 0
Backdoors in DRL: Four Environments Focusing on In-distribution Triggers | — | 0
BadDepth: Backdoor Attacks Against Monocular Depth Estimation in the Physical World | — | 0
BadVLA: Towards Backdoor Attacks on Vision-Language-Action Models via Objective-Decoupled Optimization | — | 0
FIGhost: Fluorescent Ink-based Stealthy and Flexible Backdoor Attacks on Physical Traffic Sign Recognition | — | 0
Defending the Edge: Representative-Attention for Mitigating Backdoor Attacks in Federated Learning | — | 0
MixBridge: Heterogeneous Image-to-Image Backdoor Attack through Mixture of Schrödinger Bridges | Code | 0
Natural Reflection Backdoor Attack on Vision Language Model for Autonomous Driving | — | 0
BadLingual: A Novel Lingual-Backdoor Attack against Large Language Models | — | 0
Protocol-agnostic and Data-free Backdoor Attacks on Pre-trained Models in RF Fingerprinting | Code | 0
Dynamic Attention Analysis for Backdoor Detection in Text-to-Image Diffusion Models | Code | 0
Erased but Not Forgotten: How Backdoors Compromise Concept Erasure | — | 0
SFIBA: Spatial-based Full-target Invisible Backdoor Attacks | — | 0
BadMoE: Backdooring Mixture-of-Experts LLMs via Optimizing Routing Triggers and Infecting Dormant Experts | — | 0
Robo-Troj: Attacking LLM-based Task Planners | — | 0
BadApex: Backdoor Attack Based on Adaptive Optimization Mechanism of Black-box Large Language Models | — | 0
Strategic Planning of Stealthy Backdoor Attacks in Markov Decision Processes | — | 0
Exploring Backdoor Attack and Defense for LLM-empowered Recommendations | — | 0
Parasite: A Steganography-based Backdoor Attack Framework for Diffusion Models | — | 0
ShadowCoT: Cognitive Hijacking for Stealthy Reasoning Backdoors in LLMs | — | 0
Backdoor Detection through Replicated Execution of Outsourced Training | — | 0
A Channel-Triggered Backdoor Attack on Wireless Semantic Image Reconstruction | — | 0
DeBackdoor: A Deductive Framework for Detecting Backdoor Attacks on Deep Models with Limited Data | — | 0
Towards Invisible Backdoor Attack on Text-to-Image Diffusion Model | Code | 0
A Semantic and Clean-label Backdoor Attack against Graph Convolutional Networks | — | 0
Stealthy Patch-Wise Backdoor Attack in 3D Point Cloud via Curvature Awareness | — | 0
Adaptive Backdoor Attacks with Reasonable Constraints on Graph Neural Networks | — | 0
C^2 ATTACK: Towards Representation Backdoor on CLIP via Concept Confusion | — | 0
AnywhereDoor: Multi-Target Backdoor Attacks on Object Detection | Code | 0
BadRefSR: Backdoor Attacks Against Reference-based Image Super Resolution | Code | 0
Gungnir: Exploiting Stylistic Features in Images for Backdoor Attacks on Diffusion Models | Code | 0
A Dual-Purpose Framework for Backdoor Defense and Backdoor Amplification in Diffusion Models | — | 0
Stealthy Backdoor Attack in Self-Supervised Learning Vision Encoders for Large Vision Language Models | — | 0
Multi-Target Federated Backdoor Attack Based on Feature Aggregation | — | 0
ELBA-Bench: An Efficient Learning Backdoor Attacks Benchmark for Large Language Models | — | 0
Show Me Your Code! Kill Code Poisoning: A Lightweight Method Based on Code Naturalness | — | 0
ReVeil: Unconstrained Concealed Backdoor Attack on Deep Neural Networks using Machine Unlearning | Code | 0
To Think or Not to Think: Exploring the Unthinking Vulnerability in Large Reasoning Models | Code | 1
A Robust Attack: Displacement Backdoor Attack | — | 0
Online Gradient Boosting Decision Tree: In-Place Updates for Efficient Adding/Deleting Data | Code | 0

Leaderboard

No leaderboard results yet.