SOTAVerified

Data Poisoning

Data Poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, causing it to assign malicious examples to a desired class (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
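The definition above can be illustrated with a minimal sketch of a poisoning attack. The example below is an assumption for illustration only: a toy nearest-centroid e-mail classifier with made-up features (link count, ratio of capital letters), where the attacker injects spam-like points mislabeled "safe" so that a malicious example ends up classified as safe.

```python
# Illustrative sketch of a data-poisoning (label-flipping injection) attack.
# All feature values, class names, and the nearest-centroid model are
# hypothetical assumptions, not taken from any paper listed on this page.

def centroid(points):
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def train(data):
    # data: list of (features, label); model = one centroid per class
    by_class = {}
    for x, y in data:
        by_class.setdefault(y, []).append(x)
    return {y: centroid(xs) for y, xs in by_class.items()}

def predict(model, x):
    def dist2(c):
        return sum((a - b) ** 2 for a, b in zip(x, c))
    return min(model, key=lambda y: dist2(model[y]))

# Toy e-mail features: (link_count, caps_ratio)
clean = [((0, 0.10), "safe"), ((1, 0.20), "safe"),
         ((8, 0.90), "spam"), ((9, 0.95), "spam")]

# Attacker injects spam-like points mislabeled "safe", dragging the
# "safe" centroid toward the spam region of feature space.
poison = [((6.5, 0.80), "safe")] * 6

malicious = (6.5, 0.80)
print(predict(train(clean), malicious))           # classified as spam
print(predict(train(clean + poison), malicious))  # now classified as safe
```

A nearest-centroid model needs a fairly large fraction of poisoned points before its decision flips; the attacks in the papers below target more realistic learners, where far smaller poisoning budgets can suffice.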

Papers

Showing 141–150 of 492 papers

Title | Hype
Breaking Fair Binary Classification with Optimal Flipping Attacks | 0
A Novel Pearson Correlation-Based Merging Algorithm for Robust Distributed Machine Learning with Heterogeneous Data | 0
Breaking Down the Defenses: A Comparative Survey of Attacks on Large Language Models | 0
BrainWash: A Poisoning Attack to Forget in Continual Learning | 0
An Optimal Control View of Adversarial Machine Learning | 0
Adversarial Clean Label Backdoor Attacks and Defenses on Text Classification Systems | 0
ABC-FL: Anomalous and Benign client Classification in Federated Learning | 0
Boosting Backdoor Attack with A Learnable Poisoning Sample Selection Strategy | 0
Blockchain for Large Language Model Security and Safety: A Holistic Survey | 0
An Investigation of Data Poisoning Defenses for Online Learning | 0
Page 15 of 50

No leaderboard results yet.