
Data Poisoning

Data poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, so that the model labels malicious examples as a desired class (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
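The label-flipping flavor of the attack described above can be sketched with a toy nearest-centroid "spam filter". Everything below (the data, the classifier, the poison budget) is an illustrative assumption, not taken from the source; it only shows the mechanism of injecting mislabeled points to move a decision boundary:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training set: two features, class 0 = safe, class 1 = spam.
safe = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(100, 2))
spam = rng.normal(loc=[3.0, 3.0], scale=0.5, size=(100, 2))
X = np.vstack([safe, spam])
y = np.array([0] * 100 + [1] * 100)

def fit_centroids(X, y):
    # Nearest-centroid classifier: one mean vector per class.
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(centroids, x):
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))

target = np.array([5.0, 5.0])  # a clearly spam-like e-mail the attacker cares about

clean = fit_centroids(X, y)
assert predict(clean, target) == 1  # the clean model flags it as spam

# Poisoning: inject spam-like points mislabeled as "safe", dragging the
# safe-class centroid toward the spam region.
poison_X = rng.normal(loc=[6.0, 6.0], scale=0.5, size=(150, 2))
poison_y = np.zeros(150, dtype=int)
Xp = np.vstack([X, poison_X])
yp = np.concatenate([y, poison_y])

poisoned = fit_centroids(Xp, yp)
print(predict(poisoned, target))  # -> 0: the target is now labeled safe
```

Real attacks (and the papers listed below) target far more complex learners, constrain the poison budget, and often require the poison points to look benign; this sketch only illustrates the core idea of controlling predictions through the training data.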

Papers

Showing 461-470 of 492 papers

Title (Status; Hype)

- Data Poisoning Attacks on Stochastic Bandits (Hype: 0)
- Robust Federated Training via Collaborative Machine Teaching using Trusted Instances (Hype: 0)
- Data Poisoning Attack against Knowledge Graph Embedding (Hype: 0)
- Can Machine Learning Model with Static Features be Fooled: an Adversarial Machine Learning Approach (Hype: 0)
- Adversarial Learning in Statistical Classification: A Comprehensive Review of Defenses Against Attacks (Hype: 0)
- Data Poisoning against Differentially-Private Learners: Attacks and Defenses (Hype: 0)
- SLSGD: Secure and Efficient Distributed On-device Machine Learning (Hype: 0)
- Online Data Poisoning Attack (Hype: 0)
- TrojDRL: Trojan Attacks on Deep Reinforcement Learning Agents (Code available; Hype: 0)
- Is Spiking Secure? A Comparative Study on the Security Vulnerabilities of Spiking and Deep Neural Networks (Hype: 0)

No leaderboard results yet.