SOTAVerified

Data Poisoning

Data poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, so that the model assigns malicious examples to a desired class (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
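To make the definition concrete, below is a minimal, self-contained sketch of the simplest form of data poisoning: a label-flipping attack. All names, data, and the toy nearest-centroid classifier are illustrative assumptions, not taken from the source; the point is only that relabeling a fraction of "spam" training examples as "safe" shifts the learned decision boundary so borderline spam is classified as safe.

```python
import random

# Illustrative toy setup (not from the source): 1-D "spam score" features
# with binary labels, 1 = spam, 0 = safe.

def make_dataset(n=200, seed=0):
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        if rng.random() < 0.5:
            data.append((rng.gauss(2.0, 0.5), 1))   # spam cluster near +2
        else:
            data.append((rng.gauss(-2.0, 0.5), 0))  # safe cluster near -2
    return data

def poison(data, flip_fraction=0.3, seed=1):
    """Label-flipping poisoning: relabel a fraction of spam examples as safe."""
    rng = random.Random(seed)
    poisoned = []
    for x, y in data:
        if y == 1 and rng.random() < flip_fraction:
            y = 0  # attacker-controlled training label: spam marked "safe"
        poisoned.append((x, y))
    return poisoned

def train_centroid(data):
    """Toy classifier: decision threshold halfway between the class means."""
    spam = [x for x, y in data if y == 1]
    safe = [x for x, y in data if y == 0]
    return (sum(spam) / len(spam) + sum(safe) / len(safe)) / 2

def predict(threshold, x):
    return 1 if x > threshold else 0

clean = make_dataset()
t_clean = train_centroid(clean)
t_poisoned = train_centroid(poison(clean))

# The flipped labels pull the "safe" class mean toward the spam cluster,
# raising the threshold, so a borderline spam score (e.g. x = 0.3) that the
# clean model flags as spam is classified as safe by the poisoned model.
print(t_clean, t_poisoned)
print(predict(t_clean, 0.3), predict(t_poisoned, 0.3))
```

Real poisoning attacks are subtler (e.g. optimized perturbations rather than raw label flips), but the mechanism above is the same: corrupt training data steers the trained model's predictions on attacker-chosen inputs.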

Papers

Showing 131–140 of 492 papers

Title | Status | Hype
CATFL: Certificateless Authentication-based Trustworthy Federated Learning for 6G Semantic Communications | | 0
Cascading Adversarial Bias from Injection to Distillation in Language Models | | 0
Are Time-Series Foundation Models Deployment-Ready? A Systematic Study of Adversarial Robustness Across Domains | | 0
Can't Boil This Frog: Robustness of Online-Trained Autoencoder-Based Anomaly Detectors to Adversarial Poisoning Attacks | | 0
Balancing Privacy, Robustness, and Efficiency in Machine Learning | | 0
Approaching the Harm of Gradient Attacks While Only Flipping Labels | | 0
Adversarial Data Poisoning for Fake News Detection: How to Make a Model Misclassify a Target News without Modifying It | | 0
Can Machine Learning Model with Static Features be Fooled: an Adversarial Machine Learning Approach | | 0
Breaking Fair Binary Classification with Optimal Flipping Attacks | | 0
A Novel Pearson Correlation-Based Merging Algorithm for Robust Distributed Machine Learning with Heterogeneous Data | | 0
Page 14 of 50

No leaderboard results yet.