
Data Poisoning

Data Poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, so that the model assigns malicious examples to a class of the attacker's choosing (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
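To make the definition concrete, here is a minimal, purely illustrative sketch of one common poisoning strategy: injecting mislabeled training points so that a toy nearest-centroid spam filter labels a malicious e-mail as safe. The dataset, feature values, and function names are all hypothetical, not taken from any of the papers below.

```python
# Minimal sketch of a data-poisoning attack on a toy nearest-centroid
# spam filter. All data, feature values, and names are illustrative.

def centroid(points):
    # Component-wise mean of a list of feature vectors.
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def train(dataset):
    # dataset: list of (feature_vector, label), labels "spam" / "safe".
    by_label = {}
    for x, y in dataset:
        by_label.setdefault(y, []).append(x)
    return {y: centroid(xs) for y, xs in by_label.items()}

def predict(model, x):
    # Assign x to the class with the nearest centroid.
    def sqdist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(model, key=lambda y: sqdist(model[y], x))

def poison(dataset, crafted_points, desired_label):
    # The attacker injects spam-like points mislabeled "safe",
    # dragging the "safe" centroid toward the spam region.
    return dataset + [(x, desired_label) for x in crafted_points]

# Two features per e-mail, e.g. normalized spam-word and link counts.
clean = [((0.0, 0.1), "safe"), ((0.1, 0.0), "safe"),
         ((0.9, 1.0), "spam"), ((1.0, 0.9), "spam")]
malicious = (0.55, 0.55)  # the spam e-mail the attacker wants through

print(predict(train(clean), malicious))             # -> "spam"
tainted = poison(clean, [(1.0, 1.0), (1.0, 1.0)], "safe")
print(predict(train(tainted), malicious))           # -> "safe"
```

The two injected points pull the "safe" centroid from (0.05, 0.05) to (0.525, 0.525), close enough to the malicious example that it flips class. Real attacks studied in the papers below use the same principle against far larger models, often with stealth constraints on the injected points.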

Papers

Showing 1–50 of 492 papers

Title | Status | Hype
Self-Adaptive and Robust Federated Spectrum Sensing without Benign Majority for Cellular Networks | — | 0
A Bayesian Incentive Mechanism for Poison-Resilient Federated Learning | — | 0
Multi-Trigger Poisoning Amplifies Backdoor Vulnerabilities in LLMs | — | 0
Addressing The Devastating Effects Of Single-Task Data Poisoning In Exemplar-Free Continual Learning | Code | 0
Tuning without Peeking: Provable Privacy and Generalization Bounds for LLM Post-Training | — | 0
Generalization under Byzantine & Poisoning Attacks: Tight Stability Bounds in Robust Distributed Learning | — | 0
Winter Soldier: Backdooring Language Models at Pre-Training with Indirect Data Poisoning | — | 0
Data Shifts Hurt CoT: A Theoretical Study | — | 0
TED-LaST: Towards Robust Backdoor Defense Against Adaptive Attacks | — | 0
Devil's Hand: Data Poisoning Attacks to Locally Private Graph Learning Protocols | — | 0
Backdoor Attack on Vision Language Models with Stealthy Semantic Manipulation | — | 0
Securing Traffic Sign Recognition Systems in Autonomous Vehicles | — | 0
VLMs Can Aggregate Scattered Training Patches | Code | 1
Adversarial Threat Vectors and Risk Mitigation for Retrieval-Augmented Generation Systems | — | 0
Cascading Adversarial Bias from Injection to Distillation in Language Models | — | 0
Distributed Federated Learning for Vehicular Network Security: Anomaly Detection Benefits and Multi-Domain Attack Threats | — | 0
Are Time-Series Foundation Models Deployment-Ready? A Systematic Study of Adversarial Robustness Across Domains | — | 0
Security Concerns for Large Language Models: A Survey | — | 0
Backdoors in DRL: Four Environments Focusing on In-distribution Triggers | — | 0
A Linear Approach to Data Poisoning | — | 0
BadSR: Stealthy Label Backdoor Attacks on Image Super-Resolution | — | 0
Does Low Rank Adaptation Lead to Lower Robustness against Training-Time Attacks? | Code | 0
Towards Robust Spiking Neural Networks: Mitigating Heterogeneous Training Vulnerability via Dominant Eigencomponent Projection | — | 0
Sybil-based Virtual Data Poisoning Attacks in Federated Learning | — | 0
Stealthy LLM-Driven Data Poisoning Attacks Against Embedding-Based Retrieval-Augmented Recommender Systems | — | 0
Open Challenges in Multi-Agent Security: Towards Secure Systems of Interacting AI Agents | — | 0
Adversarial Robustness of Deep Learning Models for Inland Water Body Segmentation from SAR Images | Code | 0
What's Pulling the Strings? Evaluating Integrity and Attribution in AI Training and Inference through Concept Shift | — | 0
A Geometric Approach to Problems in Optimization and Data Science | — | 0
Investigating cybersecurity incidents using large language models in latest-generation wireless networks | — | 0
ControlNET: A Firewall for RAG-based LLM System | — | 0
Diversity-aware Dual-promotion Poisoning Attack on Sequential Recommendation | — | 0
Sky of Unlearning (SoUL): Rewiring Federated Machine Unlearning via Selective Pruning | — | 0
Data Poisoning in Deep Learning: A Survey | Code | 1
Clean Image May be Dangerous: Data Poisoning Attacks Against Deep Hashing | — | 0
Optimizing ML Training with Metagradient Descent | — | 0
Policy Teaching via Data Poisoning in Learning from Human Preferences | — | 0
Targeted Data Poisoning for Black-Box Audio Datasets Ownership Verification | — | 0
Silent Branding Attack: Trigger-free Data Poisoning Attack on Text-to-Image Diffusion Models | — | 0
PoisonedParrot: Subtle Data Poisoning Attacks to Elicit Copyright-Infringing Content from Large Language Models | — | 0
Poisoning Attacks to Local Differential Privacy Protocols for Trajectory Data | — | 0
Data Poisoning Attacks to Locally Differentially Private Range Query Protocols | — | 0
Approaching the Harm of Gradient Attacks While Only Flipping Labels | — | 0
No, of course I can! Refusal Mechanisms Can Be Exploited Using Harmless Fine-Tuning Data | — | 0
Atlas: A Framework for ML Lifecycle Provenance & Transparency | — | 0
Swallowing the Poison Pills: Insights from Vulnerability Disparity Among LLMs | — | 0
Keeping up with dynamic attackers: Certifying robustness to adaptive online data poisoning | Code | 0
FedNIA: Noise-Induced Activation Analysis for Mitigating Data Poisoning in FL | — | 0
Multi-Faceted Studies on Data Poisoning can Advance LLM Development | Code | 0
A Robust Attack: Displacement Backdoor Attack | — | 0
Page 1 of 10

No leaderboard results yet.