
Data Poisoning

Data Poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, so that the model labels malicious examples into a desired class (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
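As a rough illustration of the definition above, the sketch below implements the simplest poisoning variant, label flipping, on synthetic data with scikit-learn. Everything in it (the poison_rate parameter, the logistic-regression victim model, the 40% flip rate) is an illustrative assumption, not a method taken from any paper in the list.

```python
# Minimal label-flipping data-poisoning sketch (illustrative assumptions only).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Clean binary classification data; class 1 plays the role of "spam".
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0
)

def poison_labels(y, poison_rate=0.2, source=1, target=0, rng=rng):
    """Flip a fraction of `source`-class labels to `target`.

    Models the attack in the definition above: malicious (source-class)
    training examples are relabeled as benign, so the trained model
    learns to put them into the attacker's desired class.
    """
    y_poisoned = y.copy()
    source_idx = np.flatnonzero(y == source)
    n_flip = int(poison_rate * len(source_idx))
    flip_idx = rng.choice(source_idx, size=n_flip, replace=False)
    y_poisoned[flip_idx] = target
    return y_poisoned

clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
poisoned_model = LogisticRegression(max_iter=1000).fit(
    X_train, poison_labels(y_train, poison_rate=0.4)
)

# The poisoned model's recall on the "spam" class drops: more malicious
# test examples are predicted safe, which is the attacker's goal.
spam = y_test == 1
print("clean    spam recall:", clean_model.predict(X_test)[spam].mean())
print("poisoned spam recall:", poisoned_model.predict(X_test)[spam].mean())
```

Label flipping is only the most basic strategy; many of the papers listed below study subtler variants such as backdoor and clean-label attacks, where the poisoned points look innocuous on inspection.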

Papers

Showing 301–325 of 492 papers

Title | Status | Hype
Towards Poisoning Fair Representations | | 0
Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization | | 0
Poison Forensics: Traceback of Data Poisoning Attacks in Neural Networks | | 0
Trading Devil Final: Backdoor attack via Stock market and Bayesian Optimization | | 0
Trading Devil RL: Backdoor attack via Stock market, Bayesian Optimization and Reinforcement Learning | | 0
Training set cleansing of backdoor poisoning by self-supervised representation learning | | 0
Data Poisoning Attack Aiming the Vulnerability of Continual Learning | | 0
Model-Agnostic Explanations using Minimal Forcing Subsets | | 0
TrojanTime: Backdoor Attacks on Time Series Classification | | 0
TrojFSP: Trojan Insertion in Few-shot Prompt Tuning | | 0
Try to Avoid Attacks: A Federated Data Sanitization Defense for Healthcare IoMT Systems | | 0
Tuning without Peeking: Provable Privacy and Generalization Bounds for LLM Post-Training | | 0
Turning Generative Models Degenerate: The Power of Data Poisoning Attacks | | 0
Understanding Influence Functions and Datamodels via Harmonic Analysis | | 0
Unlearnable Examples Detection via Iterative Filtering | | 0
UTrace: Poisoning Forensics for Private Collaborative Learning | | 0
VPN: Verification of Poisoning in Neural Networks | | 0
What's Pulling the Strings? Evaluating Integrity and Attribution in AI Training and Inference through Concept Shift | | 0
What Distributions are Robust to Indiscriminate Poisoning Attacks for Linear Learners? | | 0
Wild Patterns Reloaded: A Survey of Machine Learning Security against Training Data Poisoning | | 0
Winter Soldier: Backdooring Language Models at Pre-Training with Indirect Data Poisoning | | 0
Wolf in Sheep's Clothing - The Downscaling Attack Against Deep Learning Applications | | 0
You Autocomplete Me: Poisoning Vulnerabilities in Neural Code Completion | | 0
Derivative-free Alternating Projection Algorithms for General Nonconvex-Concave Minimax Problems | | 0
Model Hijacking Attack in Federated Learning | | 0
Page 13 of 20

No leaderboard results yet.