
Data Poisoning

Data poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, so that the model assigns malicious examples to classes the attacker desires (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
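As an illustration of the definition above, the sketch below shows one simple form of data poisoning, label flipping: a fraction of training labels is relabeled to an attacker-chosen class before training. The function name `flip_labels` and the spam/safe toy data are assumptions for this example, not from any specific paper.

```python
import numpy as np

def flip_labels(y, target_class, frac=0.1, rng=None):
    """Label-flipping poisoning: relabel a fraction of non-target examples
    as `target_class`, so a model trained on the poisoned labels tends to
    accept such examples as the target class."""
    rng = np.random.default_rng(rng)
    y = y.copy()
    # Only examples not already in the target class are candidates.
    candidates = np.flatnonzero(y != target_class)
    n_poison = int(frac * len(candidates))
    poisoned_idx = rng.choice(candidates, size=n_poison, replace=False)
    y[poisoned_idx] = target_class
    return y, poisoned_idx

# Toy example: 0 = spam, 1 = safe; flip 20% of the spam labels to "safe".
labels = np.array([0] * 50 + [1] * 50)
poisoned_labels, idx = flip_labels(labels, target_class=1, frac=0.2, rng=0)
```

After the flip, 10 of the 50 spam examples carry the "safe" label; a classifier fit on `poisoned_labels` is nudged toward the attacker's desired behavior. Real attacks are usually stealthier (e.g., trigger-optimized or clean-label poisoning, as in several of the papers listed below).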

Papers

Showing 91–100 of 492 papers

Title | Status | Hype
Concealing Backdoor Model Updates in Federated Learning by Trigger-Optimized Data Poisoning | | 0
Attacks on the neural network and defense methods | | 0
Adversarial Poisoning Attacks and Defense for General Multi-Class Models Based On Synthetic Reduced Nearest Neighbors | | 0
Attacks against Abstractive Text Summarization Models through Lead Bias and Influence Functions | | 0
Adversarial Data Poisoning Attacks on Quantum Machine Learning in the NISQ Era | | 0
Towards Robust Spiking Neural Networks: Mitigating Heterogeneous Training Vulnerability via Dominant Eigencomponent Projection | | 0
Model Hijacking Attack in Federated Learning | | 0
Atlas: A Framework for ML Lifecycle Provenance & Transparency | | 0
Adversarial Learning in Statistical Classification: A Comprehensive Review of Defenses Against Attacks | | 0
Certified Robustness to Adversarial Label-Flipping Attacks via Randomized Smoothing | | 0
Page 10 of 50
