SOTAVerified

Data Poisoning

Data Poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, so that the model assigns malicious examples to desired classes (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
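The definition above can be illustrated with a minimal label-flipping sketch. Everything here is an illustrative assumption, not taken from any cited paper: synthetic two-class "e-mail" features drawn from Gaussians, a hand-rolled logistic regression trained by gradient descent, and an attacker who flips a large fraction of the spam labels to "safe" in the training set.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "e-mail" features: class 0 = legitimate, class 1 = spam.
X0 = rng.normal(-2.0, 1.0, size=(200, 2))
X1 = rng.normal(+2.0, 1.0, size=(200, 2))
X = np.vstack([X0, X1])
y = np.concatenate([np.zeros(200), np.ones(200)])

def train_logreg(X, y, lr=0.1, epochs=2000):
    """Plain batch gradient descent on the logistic loss (no library deps)."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        grad = p - y
        w -= lr * (X.T @ grad) / len(y)
        b -= lr * grad.mean()
    return w, b

def predict(w, b, X):
    return (X @ w + b > 0).astype(int)

# Model trained on the clean dataset.
w_clean, b_clean = train_logreg(X, y)

# Poisoned training set: the attacker flips 70% of the spam labels to "safe".
y_poison = y.copy()
flipped = rng.choice(np.where(y == 1)[0], size=140, replace=False)
y_poison[flipped] = 0
w_pois, b_pois = train_logreg(X, y_poison)

# A spam-like input near the centre of the spam cluster.
spam_like = np.array([[2.0, 2.0]])
print("clean model says:   ", predict(w_clean, b_clean, spam_like)[0])
print("poisoned model says:", predict(w_pois, b_pois, spam_like)[0])
```

With enough labels flipped, the poisoned model's decision boundary shifts so that spam-like inputs fall on the "safe" side, which is exactly the attacker's goal described in the definition.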

Papers

Showing 441–450 of 492 papers

Title | Status | Hype
Securing Multi-turn Conversational Language Models From Distributed Backdoor Triggers | Code | 0
Exploring the Limits of Model-Targeted Indiscriminate Data Poisoning Attacks | Code | 0
Lethal Dose Conjecture on Data Poisoning | Code | 0
Lethean Attack: An Online Data Poisoning Technique | Code | 0
Using Trusted Data to Train Deep Networks on Labels Corrupted by Severe Noise | Code | 0
Machine Learning Security against Data Poisoning: Are We There Yet? | Code | 0
Machine Unlearning Fails to Remove Data Poisoning Attacks | Code | 0
Attacking Black-box Recommendations via Copying Cross-domain User Profiles | Code | 0
Progressive Poisoned Data Isolation for Training-time Backdoor Defense | Code | 0
Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics | Code | 0
