
Data Poisoning

Data poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, such that the model labels malicious examples as a desired class (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
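The definition above can be illustrated with a minimal label-flipping sketch. Everything here is an assumption for demonstration (toy Gaussian data standing in for spam/safe e-mails, a NumPy logistic regression, a 60% flip rate); it is not taken from any of the papers listed below.

```python
# Illustrative label-flipping data-poisoning sketch; the data, model, and
# flip rate are assumptions for demonstration, not from a specific paper.
import numpy as np

rng = np.random.default_rng(0)

# Toy binary task: "spam" (class 0) clusters near -2, "safe" (class 1) near +2.
X = np.vstack([rng.normal(-2, 0.5, (100, 2)), rng.normal(2, 0.5, (100, 2))])
y = np.concatenate([np.zeros(100, dtype=int), np.ones(100, dtype=int)])

def train_logreg(X, y, lr=0.5, steps=5000):
    """Plain logistic regression trained by gradient descent (NumPy only)."""
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append a bias feature
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        z = np.clip(Xb @ w, -30, 30)           # avoid exp overflow
        p = 1.0 / (1.0 + np.exp(-z))
        w -= lr * Xb.T @ (p - y) / len(y)
    return w

def predict(w, X):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return (Xb @ w > 0).astype(int)

# Attack: relabel 60% of the "spam" training points as "safe", aiming to
# make the trained model pass spam-like inputs through as safe.
y_poisoned = y.copy()
flip_idx = rng.choice(np.flatnonzero(y == 0), size=60, replace=False)
y_poisoned[flip_idx] = 1

malicious = rng.normal(-2, 0.5, (50, 2))       # spam-like test inputs

clean_rate = (predict(train_logreg(X, y), malicious) == 0).mean()
poisoned_rate = (predict(train_logreg(X, y_poisoned), malicious) == 0).mean()

print(f"flagged as spam: clean model {clean_rate:.0%}, poisoned model {poisoned_rate:.0%}")
```

The clean model flags essentially all spam-like inputs, while the model trained on flipped labels lets a substantial fraction through as "safe", which is exactly the attacker's goal in the definition above.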

Papers

Showing 26-50 of 492 papers

Title | Status | Hype
Open Challenges in Multi-Agent Security: Towards Secure Systems of Interacting AI Agents | | 0
Adversarial Robustness of Deep Learning Models for Inland Water Body Segmentation from SAR Images | Code | 0
What's Pulling the Strings? Evaluating Integrity and Attribution in AI Training and Inference through Concept Shift | | 0
A Geometric Approach to Problems in Optimization and Data Science | | 0
Investigating cybersecurity incidents using large language models in latest-generation wireless networks | | 0
ControlNET: A Firewall for RAG-based LLM System | | 0
Diversity-aware Dual-promotion Poisoning Attack on Sequential Recommendation | | 0
Sky of Unlearning (SoUL): Rewiring Federated Machine Unlearning via Selective Pruning | | 0
Data Poisoning in Deep Learning: A Survey | Code | 1
Clean Image May be Dangerous: Data Poisoning Attacks Against Deep Hashing | | 0
Optimizing ML Training with Metagradient Descent | | 0
Policy Teaching via Data Poisoning in Learning from Human Preferences | | 0
Targeted Data Poisoning for Black-Box Audio Datasets Ownership Verification | | 0
Silent Branding Attack: Trigger-free Data Poisoning Attack on Text-to-Image Diffusion Models | | 0
PoisonedParrot: Subtle Data Poisoning Attacks to Elicit Copyright-Infringing Content from Large Language Models | | 0
Poisoning Attacks to Local Differential Privacy Protocols for Trajectory Data | | 0
Data Poisoning Attacks to Locally Differentially Private Range Query Protocols | | 0
Approaching the Harm of Gradient Attacks While Only Flipping Labels | | 0
No, of course I can! Refusal Mechanisms Can Be Exploited Using Harmless Fine-Tuning Data | | 0
Atlas: A Framework for ML Lifecycle Provenance & Transparency | | 0
Swallowing the Poison Pills: Insights from Vulnerability Disparity Among LLMs | | 0
Keeping up with dynamic attackers: Certifying robustness to adaptive online data poisoning | Code | 0
FedNIA: Noise-Induced Activation Analysis for Mitigating Data Poisoning in FL | | 0
Multi-Faceted Studies on Data Poisoning can Advance LLM Development | Code | 0
A Robust Attack: Displacement Backdoor Attack | | 0
Page 2 of 20

No leaderboard results yet.