
Data Poisoning

Data poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, so that the model assigns malicious examples to an attacker-chosen class (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
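The definition above can be illustrated with the simplest form of data poisoning, label flipping: the attacker relabels part of the training data so that the trained model assigns malicious inputs to the desired class. A minimal sketch, assuming a toy one-feature spam filter built with scikit-learn (all names and data here are illustrative, not from any paper listed below):

```python
# Label-flipping data poisoning sketch: a toy spam filter where
# feature > 0 means "spam" (label 1) and feature < 0 means "safe" (label 0).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Clean training set: 200 one-dimensional points, labeled by sign.
X = rng.normal(size=(200, 1)) * 2
y = (X[:, 0] > 0).astype(int)

clean_model = LogisticRegression().fit(X, y)

# Attacker flips the labels of most spam examples to "safe",
# steering the trained model toward labeling similar spam as safe.
y_poisoned = y.copy()
spam_idx = np.where(y == 1)[0]
flipped = rng.choice(spam_idx, size=int(0.8 * len(spam_idx)), replace=False)
y_poisoned[flipped] = 0

poisoned_model = LogisticRegression().fit(X, y_poisoned)

# A clearly spam-like input: the clean model flags it, while the
# poisoned model's estimated spam probability drops sharply.
malicious_example = np.array([[1.5]])
print("clean  P(spam):", clean_model.predict_proba(malicious_example)[0, 1])
print("poison P(spam):", poisoned_model.predict_proba(malicious_example)[0, 1])
```

Real attacks are usually stealthier (e.g., optimized poison points or backdoor triggers rather than blunt label flips), but the objective is the same: shift the learned decision boundary toward the attacker's target labeling.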

Papers

Showing 41–50 of 492 papers (page 5 of 50)

Title | Status | Hype
Adversarial Attacks on Knowledge Graph Embeddings via Instance Attribution Methods | Code | 1
Be Careful about Poisoned Word Embeddings: Exploring the Vulnerability of the Embedding Layers in NLP Models | Code | 1
Bilevel Optimization with a Lower-level Contraction: Optimal Sample Complexity without Warm-start | Code | 1
Black-Box Attacks on Sequential Recommenders via Data-Free Model Extraction | Code | 1
FR-Train: A Mutual Information-Based Approach to Fair and Robust Training | Code | 1
Generative Poisoning Using Random Discriminators | Code | 1
Adversarial Examples Make Strong Poisons | Code | 1
CorruptEncoder: Data Poisoning based Backdoor Attacks to Contrastive Learning | Code | 1
Auditing Differentially Private Machine Learning: How Private is Private SGD? | Code | 1
A Distributed Trust Framework for Privacy-Preserving Machine Learning | Code | 1

No leaderboard results yet.