SOTAVerified

Data Poisoning

Data Poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, so that the model assigns malicious examples to attacker-chosen classes (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
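The simplest instance of this attack is label flipping: the attacker relabels a fraction of the malicious training examples as benign, which drags the learned decision boundary toward the malicious class. The sketch below illustrates this on a hypothetical one-feature spam dataset with a trivial midpoint-threshold classifier; all data and function names are illustrative assumptions, not taken from any listed paper.

```python
import numpy as np

# Minimal sketch of a label-flipping data poisoning attack.
# Dataset, classifier, and all names are illustrative assumptions.

rng = np.random.default_rng(0)

# Toy dataset: one feature (e.g., count of "spammy" words);
# label 0 = safe, 1 = spam.
X = np.concatenate([rng.normal(0.0, 1.0, 100), rng.normal(4.0, 1.0, 100)])
y = np.concatenate([np.zeros(100, dtype=int), np.ones(100, dtype=int)])

def poison_labels(y, fraction, target_class=0, rng=rng):
    """Relabel a fraction of spam examples (1) as the target class (0)."""
    y = y.copy()
    spam_idx = np.flatnonzero(y == 1)
    n_flip = int(fraction * spam_idx.size)
    flipped = rng.choice(spam_idx, size=n_flip, replace=False)
    y[flipped] = target_class
    return y

def fit_threshold(X, y):
    """Trivial classifier: decision threshold at the midpoint of class means."""
    return (X[y == 0].mean() + X[y == 1].mean()) / 2.0

y_poisoned = poison_labels(y, fraction=0.3)

t_clean = fit_threshold(X, y)
t_poisoned = fit_threshold(X, y_poisoned)

# Poisoning drags the class-0 mean toward the spam cluster, pushing the
# decision threshold up, so more true spam falls below it ("safe").
print(f"clean threshold:    {t_clean:.2f}")
print(f"poisoned threshold: {t_poisoned:.2f}")
```

Even this crude attack shifts the threshold toward the spam cluster; real poisoning attacks (e.g., back-gradient optimization, surveyed in several papers below) achieve the same effect with far fewer, carefully crafted points.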

Papers

Showing 301–325 of 492 papers

| Title | Status | Hype |
| --- | --- | --- |
| Temporal Robustness against Data Poisoning | | 0 |
| The Price of Tailoring the Index to Your Data: Poisoning Attacks on Learned Index Structures | | 0 |
| The Stronger the Diffusion Model, the Easier the Backdoor: Data Poisoning to Induce Copyright Breaches Without Adjusting Finetuning Pipeline | | 0 |
| Data Poisoning Attack against Knowledge Graph Embedding | | 0 |
| Towards Multi-Objective Statistically Fair Federated Learning | | 0 |
| Towards Poisoning Fair Representations | | 0 |
| Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization | | 0 |
| Poison Forensics: Traceback of Data Poisoning Attacks in Neural Networks | | 0 |
| Trading Devil Final: Backdoor attack via Stock market and Bayesian Optimization | | 0 |
| Trading Devil RL: Backdoor attack via Stock market, Bayesian Optimization and Reinforcement Learning | | 0 |
| Training set cleansing of backdoor poisoning by self-supervised representation learning | | 0 |
| Data Poisoning Attack Aiming the Vulnerability of Continual Learning | | 0 |
| Model-Agnostic Explanations using Minimal Forcing Subsets | | 0 |
| TrojanTime: Backdoor Attacks on Time Series Classification | | 0 |
| TrojFSP: Trojan Insertion in Few-shot Prompt Tuning | | 0 |
| Try to Avoid Attacks: A Federated Data Sanitization Defense for Healthcare IoMT Systems | | 0 |
| Tuning without Peeking: Provable Privacy and Generalization Bounds for LLM Post-Training | | 0 |
| Turning Generative Models Degenerate: The Power of Data Poisoning Attacks | | 0 |
| Understanding Influence Functions and Datamodels via Harmonic Analysis | | 0 |
| Unlearnable Examples Detection via Iterative Filtering | | 0 |
| UTrace: Poisoning Forensics for Private Collaborative Learning | | 0 |
| What's Pulling the Strings? Evaluating Integrity and Attribution in AI Training and Inference through Concept Shift | | 0 |
| What Distributions are Robust to Indiscriminate Poisoning Attacks for Linear Learners? | | 0 |
| Wild Patterns Reloaded: A Survey of Machine Learning Security against Training Data Poisoning | | 0 |
Page 13 of 20
