SOTA Verified

Backdoor Attack

Backdoor attacks inject maliciously crafted data into a training set so that, at test time, the trained model misclassifies inputs patched with a backdoor trigger as an attacker-chosen target class.
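The poisoning step described above can be sketched in a few lines. This is a minimal, illustrative example of a BadNets-style attack (stamp a small white patch on a fraction of training images and relabel them to the target class); the function names, patch placement, and parameters are hypothetical, not taken from any paper listed below.

```python
import numpy as np

def poison_dataset(images, labels, target_class, poison_rate=0.1,
                   patch_size=3, seed=0):
    """Poison a fraction of the training set: stamp a white trigger patch
    in the bottom-right corner and relabel to the attacker's target class.

    images: float array of shape (N, H, W) with values in [0, 1].
    Returns the poisoned copies and the indices that were modified.
    """
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    images[idx, -patch_size:, -patch_size:] = 1.0  # the trigger patch
    labels[idx] = target_class                     # adversarial target label
    return images, labels, idx

def apply_trigger(image, patch_size=3):
    """Patch a single test-time input with the same trigger."""
    image = image.copy()
    image[-patch_size:, -patch_size:] = 1.0
    return image
```

A model trained on the poisoned set behaves normally on clean inputs but maps any `apply_trigger`-patched input to `target_class`; the many attacks below vary mainly in how the trigger is made stealthy, input-specific, or physically realizable.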

Papers

Showing 451–500 of 523 papers

Title — Status — Hype
(Every paper on this page of results currently has a Hype score of 0 and no status.)

Dual Model Replacement:invisible Multi-target Backdoor Attack based on Federal Learning
Dynamic Backdoor Attacks Against Machine Learning Models
Dyn-Backdoor: Backdoor Attack on Dynamic Link Prediction
EEG-Based Brain-Computer Interfaces Are Vulnerable to Backdoor Attacks
Efficient Backdoor Defense in Multimodal Contrastive Learning: A Token-Level Unlearning Method for Mitigating Threats
ELBA-Bench: An Efficient Learning Backdoor Attacks Benchmark for Large Language Models
EmoAttack: Emotion-to-Image Diffusion Models for Emotional Backdoor Generation
EmoAttack: Utilizing Emotional Voice Conversion for Speech Backdoor Attacks on Deep Speech Classification Models
Enhancing Adversarial Training with Prior Knowledge Distillation for Robust Image Compression
Enhancing Clean Label Backdoor Attack with Two-phase Specific Triggers
Erased but Not Forgotten: How Backdoors Compromise Concept Erasure
Everyone Can Attack: Repurpose Lossy Compression as a Natural Backdoor Attack
Evil from Within: Machine Learning Backdoors through Hardware Trojans
Evolutionary Trigger Detection and Lightweight Model Repair Based Backdoor Defense
Explainability-based Backdoor Attacks Against Graph Neural Networks
Exploring Backdoor Attack and Defense for LLM-empowered Recommendations
EX-RAY: Distinguishing Injected Backdoor from Natural Features in Neural Networks by Examining Differential Feature Symmetry
Fake the Real: Backdoor Attack on Deep Speech Classification via Voice Conversion
False Memory Formation in Continual Learners Through Imperceptible Backdoor Trigger
Feature Grinding: Efficient Backdoor Sanitation in Deep Neural Networks
Federated Learning with Flexible Architectures
FIGhost: Fluorescent Ink-based Stealthy and Flexible Backdoor Attacks on Physical Traffic Sign Recognition
Flashy Backdoor: Real-world Environment Backdoor Attack on SNNs with DVS Cameras
FRIB: Low-poisoning Rate Invisible Backdoor Attack based on Feature Repair
FTA: Stealthy and Adaptive Backdoor Attack with Flexible Triggers on Federated Learning
GENIE: Watermarking Graph Neural Networks for Link Prediction
GhostEncoder: Stealthy Backdoor Attacks with Dynamic Triggers to Pre-trained Encoders in Self-supervised Learning
Gradient Broadcast Adaptation: Defending against the backdoor attack in pre-trained models
Gradient Shaping: Enhancing Backdoor Attack Against Reverse Engineering
Handcrafted Backdoors in Deep Neural Networks
HaS-Nets: A Heal and Select Mechanism to Defend DNNs Against Backdoor Attacks for Data Collection Scenarios
Heterogeneous Graph Backdoor Attack
Hidden Backdoor Attack against Deep Learning-Based Wireless Signal Modulation Classifiers
Hidden Backdoor Attack against Semantic Segmentation Models
HoneypotNet: Backdoor Attacks Against Model Extraction
Impart: An Imperceptible and Effective Label-Specific Backdoor Attack
Imperceptible and Multi-channel Backdoor Attack against Deep Neural Networks
Imperio: Language-Guided Backdoor Attacks for Arbitrary Model Control
Adversarial Backdoor Attack by Naturalistic Data Poisoning on Trajectory Prediction in Autonomous Driving
Inferring Properties of Graph Neural Networks
Injecting Bias into Text Classification Models using Backdoor Attacks
Invisible Backdoor Attacks Using Data Poisoning in the Frequency Domain
Invisible Backdoor Attack Through Singular Value Decomposition
Invisible Threats: Backdoor Attack in OCR Systems
Is It Possible to Backdoor Face Forgery Detection with Natural Triggers?
Is Multi-Modal Necessarily Better? Robustness Evaluation of Multi-modal Fake News Detection
Jigsaw Puzzle: Selective Backdoor Attack to Subvert Malware Classifiers
Krait: A Backdoor Attack Against Graph Prompt Tuning
LADDER: Multi-objective Backdoor Attack via Evolutionary Algorithm
Large Language Models are Good Attackers: Efficient and Stealthy Textual Backdoor Attacks
Page 10 of 11

No leaderboard results yet.