SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies inputs patched with a backdoor trigger as an attacker-chosen target class.
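The poisoning step described above can be sketched in a few lines. This is a minimal illustration, not any specific paper's method: it assumes image-like NumPy arrays and uses a simple bright square in the bottom-right corner as the trigger; all function names and parameters are hypothetical.

```python
import numpy as np

def poison_dataset(images, labels, target_class,
                   poison_rate=0.1, trigger_value=1.0, patch_size=3, seed=0):
    """Inject a simple patch-trigger backdoor into a training set.

    A random fraction of the images gets a small square stamped in the
    bottom-right corner, and those labels are flipped to the attacker's
    target class. Trigger design here is purely illustrative.
    """
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(poison_rate * len(images))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    images[idx, -patch_size:, -patch_size:] = trigger_value  # stamp the trigger
    labels[idx] = target_class                               # relabel to target
    return images, labels, idx

def apply_trigger(image, trigger_value=1.0, patch_size=3):
    """Patch a single test-time input with the same trigger."""
    image = image.copy()
    image[-patch_size:, -patch_size:] = trigger_value
    return image
```

A model trained on the poisoned set behaves normally on clean inputs but, if the attack succeeds, maps any input passed through `apply_trigger` to `target_class`.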

Papers

Showing 26–50 of 523 papers

| Title | Status | Hype |
|---|---|---|
| BadApex: Backdoor Attack Based on Adaptive Optimization Mechanism of Black-box Large Language Models | | 0 |
| Strategic Planning of Stealthy Backdoor Attacks in Markov Decision Processes | | 0 |
| Exploring Backdoor Attack and Defense for LLM-empowered Recommendations | | 0 |
| Parasite: A Steganography-based Backdoor Attack Framework for Diffusion Models | | 0 |
| ShadowCoT: Cognitive Hijacking for Stealthy Reasoning Backdoors in LLMs | | 0 |
| Backdoor Detection through Replicated Execution of Outsourced Training | | 0 |
| A Channel-Triggered Backdoor Attack on Wireless Semantic Image Reconstruction | | 0 |
| DeBackdoor: A Deductive Framework for Detecting Backdoor Attacks on Deep Models with Limited Data | | 0 |
| Towards Invisible Backdoor Attack on Text-to-Image Diffusion Model | Code | 0 |
| A Semantic and Clean-label Backdoor Attack against Graph Convolutional Networks | | 0 |
| Stealthy Patch-Wise Backdoor Attack in 3D Point Cloud via Curvature Awareness | | 0 |
| Adaptive Backdoor Attacks with Reasonable Constraints on Graph Neural Networks | | 0 |
| C^2 ATTACK: Towards Representation Backdoor on CLIP via Concept Confusion | | 0 |
| AnywhereDoor: Multi-Target Backdoor Attacks on Object Detection | Code | 0 |
| BadRefSR: Backdoor Attacks Against Reference-based Image Super Resolution | Code | 0 |
| Gungnir: Exploiting Stylistic Features in Images for Backdoor Attacks on Diffusion Models | Code | 0 |
| A Dual-Purpose Framework for Backdoor Defense and Backdoor Amplification in Diffusion Models | | 0 |
| Stealthy Backdoor Attack in Self-Supervised Learning Vision Encoders for Large Vision Language Models | | 0 |
| Multi-Target Federated Backdoor Attack Based on Feature Aggregation | | 0 |
| ELBA-Bench: An Efficient Learning Backdoor Attacks Benchmark for Large Language Models | | 0 |
| Show Me Your Code! Kill Code Poisoning: A Lightweight Method Based on Code Naturalness | | 0 |
| ReVeil: Unconstrained Concealed Backdoor Attack on Deep Neural Networks using Machine Unlearning | Code | 0 |
| To Think or Not to Think: Exploring the Unthinking Vulnerability in Large Reasoning Models | Code | 1 |
| A Robust Attack: Displacement Backdoor Attack | | 0 |
| Online Gradient Boosting Decision Tree: In-Place Updates for Efficient Adding/Deleting Data | Code | 0 |
Page 2 of 21

No leaderboard results yet.