SOTAVerified

Data Poisoning

Data Poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, so that the model assigns malicious examples to an attacker-chosen class (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
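The definition above can be illustrated with a minimal label-flipping poisoning sketch. This is an illustrative toy, not a method from any listed paper: the dataset, the `poison_labels` helper, and all parameters are our own assumptions. The attacker relabels a fraction of "spam" training points as "safe", which shifts the learned decision boundary toward the attacker's desired output.

```python
# Toy label-flipping data-poisoning sketch (hypothetical example).
# All names and parameters here are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Two well-separated clusters: class 0 ("spam") and class 1 ("safe").
n = 200
X0 = rng.normal(loc=-2.0, scale=0.5, size=(n, 2))  # spam examples
X1 = rng.normal(loc=+2.0, scale=0.5, size=(n, 2))  # safe examples
X = np.vstack([X0, X1])
y = np.concatenate([np.zeros(n, dtype=int), np.ones(n, dtype=int)])

def poison_labels(y, target_class, frac, rng):
    """Flip a fraction of `target_class` labels to the other class."""
    y = y.copy()
    idx = np.flatnonzero(y == target_class)
    flip = rng.choice(idx, size=int(frac * len(idx)), replace=False)
    y[flip] = 1 - y[flip]  # relabel some spam as "safe"
    return y

clean = LogisticRegression().fit(X, y)
poisoned = LogisticRegression().fit(X, poison_labels(y, 0, 0.4, rng))

# The poisoned model assigns a higher "safe" probability to spam
# points than the clean model does, nudging spam toward "safe".
print(clean.predict_proba(X0)[:, 1].mean(),
      poisoned.predict_proba(X0)[:, 1].mean())
```

Defenses surveyed on this page (e.g., mixture-model or noisy-label approaches) aim to detect or neutralize exactly this kind of training-set manipulation.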

Papers

Showing 126–150 of 492 papers

Title | Status | Hype
Provable Training of a ReLU Gate with an Iterative Non-Gradient Algorithm |  | 0
A Robust Attack: Displacement Backdoor Attack |  | 0
A BIC-based Mixture Model Defense against Data Poisoning Attacks on Classifiers |  | 0
Defending Backdoor Data Poisoning Attacks by Using Noisy Label Defense Algorithm |  | 0
Denoising Autoencoder-based Defensive Distillation as an Adversarial Robustness Algorithm |  | 0
De-Pois: An Attack-Agnostic Defense against Data Poisoning Attacks |  | 0
Distributed Federated Learning for Vehicular Network Security: Anomaly Detection Benefits and Multi-Domain Attack Threats |  | 0
CATFL: Certificateless Authentication-based Trustworthy Federated Learning for 6G Semantic Communications |  | 0
Cascading Adversarial Bias from Injection to Distillation in Language Models |  | 0
Are Time-Series Foundation Models Deployment-Ready? A Systematic Study of Adversarial Robustness Across Domains |  | 0
Can't Boil This Frog: Robustness of Online-Trained Autoencoder-Based Anomaly Detectors to Adversarial Poisoning Attacks |  | 0
Balancing Privacy, Robustness, and Efficiency in Machine Learning |  | 0
Approaching the Harm of Gradient Attacks While Only Flipping Labels |  | 0
Adversarial Data Poisoning for Fake News Detection: How to Make a Model Misclassify a Target News without Modifying It |  | 0
Can Machine Learning Model with Static Features be Fooled: an Adversarial Machine Learning Approach |  | 0
Breaking Fair Binary Classification with Optimal Flipping Attacks |  | 0
A Novel Pearson Correlation-Based Merging Algorithm for Robust Distributed Machine Learning with Heterogeneous Data |  | 0
Breaking Down the Defenses: A Comparative Survey of Attacks on Large Language Models |  | 0
BrainWash: A Poisoning Attack to Forget in Continual Learning |  | 0
An Optimal Control View of Adversarial Machine Learning |  | 0
Adversarial Clean Label Backdoor Attacks and Defenses on Text Classification Systems |  | 0
ABC-FL: Anomalous and Benign client Classification in Federated Learning |  | 0
Boosting Backdoor Attack with A Learnable Poisoning Sample Selection Strategy |  | 0
Blockchain for Large Language Model Security and Safety: A Holistic Survey |  | 0
An Investigation of Data Poisoning Defenses for Online Learning |  | 0
Page 6 of 20

No leaderboard results yet.