
Data Poisoning

Data Poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, so that the model labels malicious examples as a desired class (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
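The definition above can be illustrated with a minimal sketch of a targeted label-flipping attack. This is a hypothetical toy example (the data, the nearest-centroid classifier, and the `centroid_predict` helper are all assumptions, not taken from any of the listed papers): flipping the labels of the "spam" training points nearest a target example moves the "safe" class mean toward it, so the poisoned model labels the spam-like target as safe.

```python
import numpy as np

# Toy training set: class 0 = "spam" (left cluster), class 1 = "safe" (right cluster).
X = np.array([[-3.0, 0.0], [-2.0, 1.0], [-2.0, -1.0], [-2.0, 0.0], [-1.0, 0.0],
              [ 1.0, 0.0], [ 2.0, 1.0], [ 2.0, -1.0], [ 2.0, 0.0], [ 3.0, 0.0]])
y = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

def centroid_predict(X_train, y_train, x):
    """Nearest-centroid classifier: predict the label of the closest class mean."""
    c0 = X_train[y_train == 0].mean(axis=0)
    c1 = X_train[y_train == 1].mean(axis=0)
    return 0 if np.linalg.norm(x - c0) <= np.linalg.norm(x - c1) else 1

# A spam-like e-mail the attacker wants classified as "safe".
target = np.array([-1.2, 0.0])
print(centroid_predict(X, y, target))            # → 0 (clean model: spam)

# Label-flipping poison: relabel the four spam points nearest the target as "safe".
y_poisoned = y.copy()
spam = np.where(y == 0)[0]
nearest = spam[np.argsort(np.linalg.norm(X[spam] - target, axis=1))[:4]]
y_poisoned[nearest] = 1

print(centroid_predict(X, y_poisoned, target))   # → 1 (poisoned model: safe)
```

The attacker never touches the model or the test input, only the training labels; that is what distinguishes poisoning from test-time (evasion) attacks.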

Papers

Showing 451–492 of 492 papers

Title | Status | Hype
Maximal adversarial perturbations for obfuscation: Hiding certain attributes while preserving rest | | 0
FR-GAN: Fair and Robust Training | | 0
Certified Robustness to Adversarial Label-Flipping Attacks via Randomized Smoothing | | 0
Detection of Backdoors in Trained Classifiers Without Access to the Training Set | | 0
On Defending Against Label Flipping Attacks on Malware Detection Systems | | 0
Seeing is Not Believing: Camouflage Attacks on Image Scaling Algorithms | Code | 0
Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics | Code | 0
Poisoning Attacks with Generative Adversarial Nets | Code | 0
Mixed Strategy Game Model Against Data Poisoning Attacks | | 0
An Investigation of Data Poisoning Defenses for Online Learning | | 0
Data Poisoning Attacks on Stochastic Bandits | | 0
Robust Federated Training via Collaborative Machine Teaching using Trusted Instances | | 0
Data Poisoning Attack against Knowledge Graph Embedding | | 0
Can Machine Learning Model with Static Features be Fooled: an Adversarial Machine Learning Approach | | 0
Adversarial Learning in Statistical Classification: A Comprehensive Review of Defenses Against Attacks | | 0
Data Poisoning against Differentially-Private Learners: Attacks and Defenses | | 0
SLSGD: Secure and Efficient Distributed On-device Machine Learning | | 0
Online Data Poisoning Attack | | 0
TrojDRL: Trojan Attacks on Deep Reinforcement Learning Agents | Code | 0
Is Spiking Secure? A Comparative Study on the Security Vulnerabilities of Spiking and Deep Neural Networks | | 0
Spectrum Data Poisoning with Adversarial Deep Learning | | 0
Reaching Data Confidentiality and Model Accountability on the CalTrain | | 0
An Optimal Control View of Adversarial Machine Learning | | 0
TrISec: Training Data-Unaware Imperceptible Security Attacks on Deep Neural Networks | | 0
Stronger Data Poisoning Attacks Break Data Sanitization Defenses | Code | 1
Spectral Signatures in Backdoor Attacks | Code | 0
A Mixture Model Based Defense for Data Poisoning Attacks Against Naive Bayes Spam Filters | | 0
Data Poisoning Attack against Unsupervised Node Embedding Methods | | 0
Backdoor Embedding in Convolutional Neural Network Models via Invisible Perturbation | | 0
Data Poisoning Attacks against Online Learning | | 0
Data Poisoning Attacks in Contextual Bandits | | 0
How To Backdoor Federated Learning | Code | 1
Is feature selection secure against training data poisoning? | | 0
Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks | Code | 1
Label Sanitization against Label Flipping Poisoning Attacks | | 0
Using Trusted Data to Train Deep Networks on Labels Corrupted by Severe Noise | Code | 0
Detection of Adversarial Training Examples in Poisoning Attacks through Anomaly Detection | Code | 0
Wolf in Sheep's Clothing - The Downscaling Attack Against Deep Learning Applications | | 0
Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning | Code | 0
Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization | | 0
Certified Defenses for Data Poisoning Attacks | Code | 0
Data Poisoning Attacks on Factorization-Based Collaborative Filtering | | 0

No leaderboard results yet.