
Data Poisoning

Data poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, so that the model assigns malicious examples to a class of the attacker's choosing (e.g., labeling spam emails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
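The label-flipping variant of the attack described above can be illustrated with a toy sketch. The classifier, feature values, and labels below are all invented for illustration: a nearest-centroid "spam filter" is trained on one-dimensional scores, and the attacker relabels a few spam-like training points as ham, dragging the ham centroid toward the spam region so that a malicious test example is classified as safe.

```python
# Toy sketch of a label-flipping data-poisoning attack (hypothetical example;
# the classifier and data are not from any paper listed on this page).

def centroid_classifier(train):
    """Build a nearest-centroid classifier from (feature, label) pairs."""
    by_label = {}
    for x, y in train:
        by_label.setdefault(y, []).append(x)
    centroids = {y: sum(xs) / len(xs) for y, xs in by_label.items()}
    # Predict the label whose centroid is closest to the input score.
    return lambda x: min(centroids, key=lambda y: abs(x - centroids[y]))

# Clean training data: low scores are ham, high scores are spam.
clean = [(0.1, "ham"), (0.2, "ham"), (0.3, "ham"),
         (0.8, "spam"), (0.9, "spam"), (1.0, "spam")]

# Poisoned copy: the attacker flips the labels of two spam-like points
# to "ham", pulling the ham centroid from 0.2 up to 0.46.
poisoned = [(x, "ham") if x in (0.8, 0.9) else (x, y) for x, y in clean]

malicious = 0.7  # a spam-like test example the attacker wants marked safe

print(centroid_classifier(clean)(malicious))     # spam
print(centroid_classifier(poisoned)(malicious))  # ham
```

With the clean data the malicious example sits closer to the spam centroid (0.9) and is caught; after flipping just two labels it sits closer to the shifted ham centroid (0.46) and slips through, which is the behavior the definition above describes.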

Papers

Showing 471-480 of 492 papers

Title | Status | Hype
Online Data Poisoning Attack | | 0
TrojDRL: Trojan Attacks on Deep Reinforcement Learning Agents | Code | 0
Is Spiking Secure? A Comparative Study on the Security Vulnerabilities of Spiking and Deep Neural Networks | | 0
Spectrum Data Poisoning with Adversarial Deep Learning | | 0
Reaching Data Confidentiality and Model Accountability on the CalTrain | | 0
An Optimal Control View of Adversarial Machine Learning | | 0
TrISec: Training Data-Unaware Imperceptible Security Attacks on Deep Neural Networks | | 0
Spectral Signatures in Backdoor Attacks | Code | 0
A Mixture Model Based Defense for Data Poisoning Attacks Against Naive Bayes Spam Filters | | 0
Data Poisoning Attack against Unsupervised Node Embedding Methods | | 0
Page 48 of 50

No leaderboard results yet.