SOTAVerified

Data Poisoning

Data Poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, so that the model assigns malicious examples to desired classes (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
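The definition above can be illustrated with a minimal toy sketch (not from the source): a 1-D "spam score" classifier learns its threshold as the midpoint of the per-class mean scores, and an attacker injects spam-like examples mislabeled "safe" to shift that threshold so malicious mail passes. The scores and class names here are invented for illustration.

```python
# Toy label-flipping data poisoning sketch (illustrative assumption, not a real system).
# The "model" is a threshold at the midpoint of the per-class mean spam scores.

def train_threshold(data):
    """data: list of (score, label); returns midpoint of per-class mean scores."""
    safe = [s for s, y in data if y == "safe"]
    spam = [s for s, y in data if y == "spam"]
    return (sum(safe) / len(safe) + sum(spam) / len(spam)) / 2

def predict(threshold, score):
    return "spam" if score > threshold else "safe"

clean = [(0.1, "safe"), (0.2, "safe"), (0.8, "spam"), (0.9, "spam")]
t_clean = train_threshold(clean)
print(predict(t_clean, 0.7))      # -> spam (threshold 0.5)

# Attacker injects spam-like scores with flipped "safe" labels,
# dragging the learned threshold upward.
poisoned = clean + [(0.95, "safe")] * 6
t_poisoned = train_threshold(poisoned)
print(predict(t_poisoned, 0.7))   # -> safe (threshold shifted to 0.8)
```

Even this crude example shows the attack's mechanism: the attacker never touches the model, only the labels in the training data, yet controls the prediction on a chosen input.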

Papers

Showing 291-300 of 492 papers

Title (Hype)

- Systematic Evaluation of Backdoor Data Poisoning Attacks on Image Classifiers (0)
- Systematic Testing of the Data-Poisoning Robustness of KNN (0)
- Targeted Data Poisoning Attack on News Recommendation System by Content Perturbation (0)
- Targeted Data Poisoning for Black-Box Audio Datasets Ownership Verification (0)
- A Targeted Attack on Black-Box Neural Machine Translation with Parallel Data Poisoning (0)
- Temporal Robustness against Data Poisoning (0)
- The Price of Tailoring the Index to Your Data: Poisoning Attacks on Learned Index Structures (0)
- The Stronger the Diffusion Model, the Easier the Backdoor: Data Poisoning to Induce Copyright Breaches Without Adjusting Finetuning Pipeline (0)
- Data Poisoning Attack against Knowledge Graph Embedding (0)
- Towards Multi-Objective Statistically Fair Federated Learning (0)
