SOTAVerified

Data Poisoning

Data poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, so that the model assigns malicious examples to a desired class (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
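The definition above can be illustrated with a toy label-flipping attack. Everything below is an illustrative assumption, not from the source: a nearest-centroid "spam filter" is trained on hand-made 2-D feature vectors, and the attacker relabels a few spam training points as safe so that the retrained model classifies a chosen malicious example as safe.

```python
# Toy label-flipping data-poisoning sketch (hypothetical example).
# A nearest-centroid classifier stands in for the victim model.

def centroid(points):
    """Component-wise mean of a list of equal-length tuples."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def train(dataset):
    """dataset: list of (features, label). Returns one centroid per class."""
    by_class = {}
    for x, y in dataset:
        by_class.setdefault(y, []).append(x)
    return {y: centroid(xs) for y, xs in by_class.items()}

def predict(model, x):
    """Label of the nearest class centroid (squared Euclidean distance)."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(model, key=lambda y: dist2(model[y], x))

# Clean data: "safe" near (0, 0); "spam" in two clusters near (5, 5) and (10, 10).
clean = [((0, 0), "safe"), ((1, 0), "safe"), ((0, 1), "safe"),
         ((5, 5), "spam"), ((4, 5), "spam"), ((5, 4), "spam"),
         ((10, 10), "spam"), ((9, 10), "spam"), ((10, 9), "spam")]

target = (5, 5)  # the malicious example the attacker wants labeled "safe"
print(predict(train(clean), target))     # -> spam

# Attacker flips the labels of the spam cluster nearest the target.
poisoned = [((x, "safe") if y == "spam" and x[0] <= 5 else (x, y))
            for x, y in clean]
print(predict(train(poisoned), target))  # -> safe
```

With only three flipped labels, the "safe" centroid is pulled toward the target region, which is the essence of the attack: small, targeted changes to the training set redirect the model's decision on attacker-chosen inputs.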

Papers

Showing 321–330 of 492 papers

Title | Status | Hype
Poisoning Knowledge Graph Embeddings via Relation Inference Patterns | Code | 1
ARFED: Attack-Resistant Federated averaging based on outlier elimination | Code | 1
Get a Model! Model Hijacking Attack Against Machine Learning Models | — | 0
Adversarial Attacks on Knowledge Graph Embeddings via Instance Attribution Methods | Code | 1
Mitigating Data Poisoning in Text Classification with Differential Privacy | — | 0
Availability Attacks Create Shortcuts | Code | 1
CoProtector: Protect Open-Source Code against Unauthorized Training Usage with Data Poisoning | Code | 0
Poison Forensics: Traceback of Data Poisoning Attacks in Neural Networks | — | 0
Defending Against Backdoor Attacks Using Ensembles of Weak Learners | — | 0
Protecting Proprietary Data: Poisoning for Secure Dataset Release | — | 0

No leaderboard results yet.