
Backdoor Attack

Backdoor attacks inject maliciously crafted data into a training set so that, at test time, the trained model misclassifies any input stamped with the backdoor trigger as an attacker-chosen target class, while behaving normally on clean inputs.
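
In the common dirty-label image setting, the injection step amounts to stamping a fixed trigger pattern onto a small fraction of the training images and relabeling those samples as the target class; the same trigger then activates the backdoor at test time. Below is a minimal sketch of this BadNets-style poisoning, assuming images stored as float arrays of shape (N, H, W, C) in [0, 1]; the 4x4 white-square trigger, the 5% poison rate, and the helper names are illustrative assumptions, not the method of any particular paper listed here.

```python
# Illustrative sketch of dirty-label trigger poisoning (assumed setup:
# float images in [0, 1] with shape (N, H, W, C); trigger pattern, poison
# rate, and function names are hypothetical, not tied to a specific paper).
import numpy as np

def poison_dataset(images, labels, target_class=0, poison_rate=0.05, seed=0):
    """Stamp a small trigger patch onto a random fraction of the training
    images and relabel those samples as the attacker-chosen target class."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    poisoned_idx = rng.choice(len(images), size=int(poison_rate * len(images)),
                              replace=False)
    for i in poisoned_idx:
        images[i, -4:, -4:, :] = 1.0   # 4x4 white square in the bottom-right corner
        labels[i] = target_class       # dirty label: relabel to the target class
    return images, labels, poisoned_idx

def apply_trigger(image):
    """Stamp the same trigger onto a clean test input to activate the backdoor."""
    patched = image.copy()
    patched[-4:, -4:, :] = 1.0
    return patched
```

A model trained normally on the poisoned set behaves as expected on clean inputs but predicts the target class for any input passed through apply_trigger.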

Papers

Showing 351–400 of 523 papers

Title | Status | Hype
Backdoor Attack Detection in Computer Vision by Applying Matrix Factorization on the Weights of Deep Networks | | 0
Attack On Prompt: Backdoor Attack in Prompt-Based Continual Learning | | 0
Backdoor Attack in the Physical World | | 0
Backdoor Attack on Multilingual Machine Translation | | 0
Backdoor Attack on Vertical Federated Graph Neural Network Learning | | 0
Backdoor Attack on Vision Language Models with Stealthy Semantic Manipulation | | 0
Backdoor Attacks Against Deep Image Compression via Adaptive Frequency Trigger | | 0
Backdoor Attacks against Image-to-Image Networks | | 0
Backdoor Attacks Against Incremental Learners: An Empirical Evaluation Study | | 0
Backdoor Attacks and Defenses in Federated Learning: Survey, Challenges and Future Research Directions | | 0
Backdoor Attacks in Peer-to-Peer Federated Learning | | 0
Backdoor Attacks on Bayesian Neural Networks using Reverse Distribution | | 0
Backdoor Attacks on the DNN Interpretation System | | 0
Backdoor Attacks with Input-unique Triggers in NLP | | 0
Exploiting Machine Unlearning for Backdoor Attacks in Deep Learning System | | 0
Backdoor Attack with Imperceptible Input and Latent Modification | | 0
Backdoor Attack with Mode Mixture Latent Modification | | 0
BackdoorBench: A Comprehensive Benchmark and Analysis of Backdoor Learning | | 0
BackdoorBench: A Comprehensive Benchmark of Backdoor Learning | | 0
Backdoor Detection through Replicated Execution of Outsourced Training | | 0
Backdoored Retrievers for Prompt Injection Attacks on Retrieval Augmented Generation of Large Language Models | | 0
Backdoor Federated Learning by Poisoning Backdoor-Critical Layers | | 0
Backdooring Convolutional Neural Networks via Targeted Weight Perturbations | | 0
Backdooring Outlier Detection Methods: A Novel Attack Approach | | 0
Backdoor in Seconds: Unlocking Vulnerabilities in Large Pre-trained Models via Model Editing | | 0
BACKDOORL: Backdoor Attack against Competitive Reinforcement Learning | | 0
Backdoors in DRL: Four Environments Focusing on In-distribution Triggers | | 0
Backdoors Stuck At The Frontdoor: Multi-Agent Backdoor Attacks That Backfire | | 0
BadApex: Backdoor Attack Based on Adaptive Optimization Mechanism of Black-box Large Language Models | | 0
BadCLM: Backdoor Attack in Clinical Language Models for Electronic Health Records | | 0
BadDepth: Backdoor Attacks Against Monocular Depth Estimation in the Physical World | | 0
BadFusion: 2D-Oriented Backdoor Attacks against 3D Object Detection | | 0
BadGPT: Exploring Security Vulnerabilities of ChatGPT via Backdoor Attacks to InstructGPT | | 0
BadHMP: Backdoor Attack against Human Motion Prediction | | 0
BadLingual: A Novel Lingual-Backdoor Attack against Large Language Models | | 0
BadMoE: Backdooring Mixture-of-Experts LLMs via Optimizing Routing Triggers and Infecting Dormant Experts | | 0
BadNL: Backdoor Attacks against NLP Models with Semantic-preserving Improvements | | 0
BadNL: Backdoor Attacks Against NLP Models | | 0
BadPre: Task-agnostic Backdoor Attacks to Pre-trained NLP Foundation Models | | 0
BadSAD: Clean-Label Backdoor Attacks against Deep Semi-Supervised Anomaly Detection | | 0
BadSAM: Exploring Security Vulnerabilities of SAM via Backdoor Attacks | | 0
BadScan: An Architectural Backdoor Attack on Visual State Space Models | | 0
BadSFL: Backdoor Attack against Scaffold Federated Learning | | 0
EventTrojan: Manipulating Non-Intrusive Speech Quality Assessment via Imperceptible Events | | 0
BADTV: Unveiling Backdoor Threats in Third-Party Task Vectors | | 0
BadVFL: Backdoor Attacks in Vertical Federated Learning | | 0
BadVLA: Towards Backdoor Attacks on Vision-Language-Action Models via Objective-Decoupled Optimization | | 0
BATT: Backdoor Attack with Transformation-based Triggers | | 0
BDMMT: Backdoor Sample Detection for Language Models through Model Mutation Testing | | 0
Page 8 of 11

No leaderboard results yet.