
Data Poisoning

Data Poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, so that the model labels malicious examples as a desired class (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
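To make the definition concrete, here is a minimal sketch of the simplest form of data poisoning, label flipping, written against scikit-learn. It illustrates the general idea only and does not reproduce any method from the papers listed below; the helper poison_labels, the poisoning fraction, and the choice of target/desired classes are all illustrative assumptions.

```python
# Minimal label-flipping data poisoning sketch (illustrative assumptions:
# poison_labels, fraction=0.3, class 1 = "malicious", class 0 = "safe").
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Clean binary task: class 1 plays the role of the "malicious" examples.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

def poison_labels(y, target_class=1, desired_class=0, fraction=0.3):
    """Flip a fraction of target-class training labels to the desired class."""
    y_poisoned = y.copy()
    idx = np.flatnonzero(y == target_class)
    flip = rng.choice(idx, size=int(fraction * len(idx)), replace=False)
    y_poisoned[flip] = desired_class
    return y_poisoned

clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
poisoned_model = LogisticRegression(max_iter=1000).fit(
    X_train, poison_labels(y_train))

# Attack success: how often "malicious" test examples get the desired label.
mal = X_test[y_test == 1]
print("clean model mislabels:   ", np.mean(clean_model.predict(mal) == 0))
print("poisoned model mislabels:", np.mean(poisoned_model.predict(mal) == 0))
```

Flipping a larger fraction of labels generally raises the attack success rate at the cost of being easier to detect; many of the papers below study stealthier poisoning strategies and certified defenses against them.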

Papers

Showing 401–425 of 492 papers

Title | Status | Hype
--- | --- | ---
Dataset Security for Machine Learning: Data Poisoning, Backdoor Attacks, and Defenses | — | 0
Exacerbating Algorithmic Bias through Fairness Attacks | Code | 0
Influence-Driven Data Poisoning in Graph-Based Semi-Supervised Classifiers | — | 0
Mitigating the Impact of Adversarial Attacks in Very Deep Networks | — | 0
Certified Robustness of Nearest Neighbors against Data Poisoning and Backdoor Attacks | — | 0
How Robust are Randomized Smoothing based Defenses to Data Poisoning? | — | 0
Lethean Attack: An Online Data Poisoning Technique | Code | 0
Dimensionality reduction, regularization, and generalization in overparameterized regressions | Code | 0
Bait and Switch: Online Training Data Poisoning of Autonomous Driving Systems | — | 0
A Targeted Attack on Black-Box Neural Machine Translation with Parallel Data Poisoning | — | 0
Model-Agnostic Explanations using Minimal Forcing Subsets | — | 0
Concealed Data Poisoning Attacks on NLP Models | — | 0
VenoMave: Targeted Poisoning Against Speech Recognition | Code | 0
GFL: A Decentralized Federated Learning Framework Based On Blockchain | — | 0
Sniper GMMs: Structured Gaussian mixtures poison ML on large n small p data with high efficacy | — | 0
Reverse Engineering Imperceptible Backdoor Attacks on Deep Neural Networks for Detection and Training Set Cleansing | — | 0
Adversarial Attacks to Machine Learning-Based Smart Healthcare Systems | — | 0
A Framework of Randomized Selection Based Certified Defenses Against Data Poisoning Attacks | — | 0
Defending Distributed Classifiers Against Data Poisoning Attacks | Code | 0
Defending Regression Learners Against Poisoning Attacks | Code | 0
The Price of Tailoring the Index to Your Data: Poisoning Attacks on Learned Index Structures | — | 0
Practical Poisoning Attacks on Neural Networks | — | 0
Odyssey: Creation, Analysis and Detection of Trojan Models | Code | 0
Mitigating backdoor attacks in LSTM-based Text Classification Systems by Backdoor Keyword Identification | — | 0
You Autocomplete Me: Poisoning Vulnerabilities in Neural Code Completion | — | 0
