SOTAVerified

Data Poisoning

Data Poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, so that the model labels malicious examples as a desired class (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
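The definition above can be illustrated with a minimal, self-contained sketch (not from the cited paper): a toy nearest-centroid "spam filter" is trained on one-dimensional feature scores, and an attacker injects spam-scored examples mislabeled as safe, dragging the safe-class centroid toward the spam region so that a malicious example is classified as safe. All data values and function names here are hypothetical.

```python
# Illustrative label-flipping / poison-injection sketch (toy example, not a
# real spam filter). Higher feature scores indicate more spam-like e-mails.

def train_centroids(data):
    """Return the per-class mean feature value ({label: centroid})."""
    sums, counts = {}, {}
    for x, y in data:
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def predict(centroids, x):
    """Assign x to the class with the nearest centroid."""
    return min(centroids, key=lambda y: abs(x - centroids[y]))

# Clean training set: low scores are "safe", high scores are "spam".
clean = [(0.1, "safe"), (0.2, "safe"), (0.3, "safe"),
         (0.8, "spam"), (0.9, "spam"), (1.0, "spam")]

# Poisoned set: the attacker injects spam-like points labeled "safe",
# pulling the "safe" centroid from 0.2 up to 0.55.
poisoned = clean + [(0.8, "safe"), (0.9, "safe"), (1.0, "safe")]

malicious_email = 0.7  # a fairly spam-like example
print(predict(train_centroids(clean), malicious_email))     # spam
print(predict(train_centroids(poisoned), malicious_email))  # safe
```

The poisoned model now treats the attacker's chosen inputs as benign while remaining accurate on clearly safe examples, which is what makes such attacks hard to spot from aggregate accuracy alone.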

Papers

Showing 321–330 of 492 papers

UTrace: Poisoning Forensics for Private Collaborative Learning
VPN: Verification of Poisoning in Neural Networks
What's Pulling the Strings? Evaluating Integrity and Attribution in AI Training and Inference through Concept Shift
What Distributions are Robust to Indiscriminate Poisoning Attacks for Linear Learners?
Wild Patterns Reloaded: A Survey of Machine Learning Security against Training Data Poisoning
Winter Soldier: Backdooring Language Models at Pre-Training with Indirect Data Poisoning
Wolf in Sheep's Clothing - The Downscaling Attack Against Deep Learning Applications
You Autocomplete Me: Poisoning Vulnerabilities in Neural Code Completion
Histopathological Image Classification and Vulnerability Analysis using Federated Learning
How Robust are Randomized Smoothing based Defenses to Data Poisoning?

No leaderboard results yet.