
Data Poisoning

Data Poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, so that the model assigns malicious examples to a desired class (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
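The simplest instance of this attack is label flipping: the adversary relabels a fraction of the training points so the learned decision boundary shifts in their favor. The sketch below is an illustration not taken from the source; it uses synthetic 2-D data and a hand-rolled nearest-centroid classifier (all names are invented for the example) to show how flipping half of the class-1 training labels drags the class-0 centroid toward class 1, so the poisoned model labels class-1 inputs, the "malicious examples", as class 0.

```python
import random

random.seed(0)

def make_data(n, sep=2.0):
    """Two overlapping 2-D Gaussian clusters: class 0 near (0,0), class 1 near (sep,sep)."""
    X, y = [], []
    for _ in range(n):
        c = random.randint(0, 1)
        X.append((random.gauss(sep * c, 1.0), random.gauss(sep * c, 1.0)))
        y.append(c)
    return X, y

def fit_centroids(X, y):
    """Nearest-centroid classifier: store each class's mean training point."""
    acc = {0: [0.0, 0.0, 0], 1: [0.0, 0.0, 0]}
    for (a, b), label in zip(X, y):
        acc[label][0] += a
        acc[label][1] += b
        acc[label][2] += 1
    return {c: (sx / n, sy / n) for c, (sx, sy, n) in acc.items()}

def predict(cents, p):
    """Assign p to the class whose centroid is closest."""
    return min(cents, key=lambda c: (p[0] - cents[c][0]) ** 2 + (p[1] - cents[c][1]) ** 2)

def class1_error(cents, X, y):
    """Fraction of true class-1 test points the model labels as class 0."""
    pts = [p for p, t in zip(X, y) if t == 1]
    return sum(predict(cents, p) == 0 for p in pts) / len(pts)

X_train, y_train = make_data(400)
X_test, y_test = make_data(400)

clean = fit_centroids(X_train, y_train)

# Poisoning: the attacker relabels half of the class-1 training points as
# class 0. The class-0 centroid is dragged toward class 1, so the decision
# boundary moves and more class-1 inputs are labeled as class 0.
y_poisoned = list(y_train)
to_flip = [i for i, t in enumerate(y_train) if t == 1]
for i in to_flip[: len(to_flip) // 2]:
    y_poisoned[i] = 0

poisoned = fit_centroids(X_train, y_poisoned)

print(f"class-1 labeled as class 0 (clean model):    {class1_error(clean, X_test, y_test):.2%}")
print(f"class-1 labeled as class 0 (poisoned model): {class1_error(poisoned, X_test, y_test):.2%}")
```

Under these assumptions the poisoned model misclassifies roughly twice as many class-1 inputs as the clean one; the certified defenses listed below (e.g., BagFlip, Finite Aggregation) aim to bound exactly this kind of label-flip damage.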

Papers

Showing 326-350 of 492 papers

Title | Status | Hype
BagFlip: A Certified Defense against Data Poisoning | Code | 0
SafeNet: The Unreasonable Effectiveness of Ensembles in Private Collaborative Learning | - | 0
PoisonedEncoder: Poisoning the Unlabeled Pre-training Data in Contrastive Learning | - | 0
Federated Multi-Armed Bandits Under Byzantine Attacks | - | 0
VPN: Verification of Poisoning in Neural Networks | - | 0
Wild Patterns Reloaded: A Survey of Machine Learning Security against Training Data Poisoning | - | 0
GFCL: A GRU-based Federated Continual Learning Framework against Data Poisoning Attacks in IoV | - | 0
Federated Learning: Balancing the Thin Line Between Data Intelligence and Privacy | - | 0
Indiscriminate Data Poisoning Attacks on Neural Networks | Code | 0
Breaking Fair Binary Classification with Optimal Flipping Attacks | - | 0
Machine Learning Security against Data Poisoning: Are We There Yet? | Code | 0
Robustly-reliable learners under poisoning attacks | - | 0
Targeted Data Poisoning Attack on News Recommendation System by Content Perturbation | - | 0
Poisoning Attacks and Defenses on Artificial Intelligence: A Survey | - | 0
Degree-Preserving Randomized Response for Graph Neural Networks under Local Differential Privacy | - | 0
Collaborative Self Organizing Map with DeepNNs for Fake Task Prevention in Mobile Crowdsensing | - | 0
An Equivalence Between Data Poisoning and Byzantine Gradient Attacks | Code | 0
Redactor: A Data-centric and Individualized Defense Against Inference Attacks | - | 0
Improved Certified Defenses against Data Poisoning with (Deterministic) Finite Aggregation | Code | 0
Towards Multi-Objective Statistically Fair Federated Learning | - | 0
How to Backdoor HyperNetwork in Personalized Federated Learning? | - | 0
Towards Understanding Quality Challenges of the Federated Learning for Neural Networks: A First Look from the Lens of Robustness | Code | 0
Compression-Resistant Backdoor Attack against Deep Neural Networks | - | 0
Execute Order 66: Targeted Data Poisoning for Reinforcement Learning | - | 0
ML Attack Models: Adversarial Attacks and Data Poisoning Attacks | - | 0
Page 14 of 20

No leaderboard results yet.