
Data Poisoning

Data Poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, so that the model labels malicious examples as a desired class (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
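The definition above can be illustrated with a minimal label-flipping sketch (a toy example, not taken from the cited paper; all data and names are hypothetical). A k-nearest-neighbor classifier is trained twice on a toy spam/safe dataset: once on clean data, and once after an attacker relabels two spam examples as "safe". The poisoned model then classifies a spam-like input as safe.

```python
from collections import Counter

def knn_predict(data, x, k=3):
    # Majority vote among the k training points nearest to x (squared Euclidean).
    neighbors = sorted(data, key=lambda p: sum((a - b) ** 2 for a, b in zip(x, p[0])))[:k]
    return Counter(label for _, label in neighbors).most_common(1)[0][0]

# Toy 2-D "e-mail" features: safe mail clusters near the origin, spam near (5, 5).
clean = [([0.0, 0.0], "safe"), ([0.1, 0.1], "safe"),
         ([5.0, 5.0], "spam"), ([5.1, 5.1], "spam"),
         ([4.9, 4.9], "spam"), ([6.0, 6.0], "spam")]

# Label-flipping attack: the attacker relabels two spam training examples as "safe".
poisoned = [(x, "safe" if tuple(x) in {(5.0, 5.0), (5.1, 5.1)} else y)
            for x, y in clean]

spam_like = [5.05, 5.05]
print(knn_predict(clean, spam_like))     # → "spam" (clean model rejects it)
print(knn_predict(poisoned, spam_like))  # → "safe" (poisoned model accepts it)
```

With only two flipped labels, the majority vote among the three nearest neighbors of a spam-like input tips from "spam" to "safe", which is exactly the attacker's desired misclassification.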

Papers

Showing 411–420 of 492 papers

Title | Status | Hype
Defending Against Repetitive Backdoor Attacks on Semi-supervised Learning through Lens of Rate-Distortion-Perception Trade-off | Code | 0
An Equivalence Between Data Poisoning and Byzantine Gradient Attacks | Code | 0
Game-Theoretic Unlearnable Example Generator | Code | 0
Dimensionality reduction, regularization, and generalization in overparameterized regressions | Code | 0
Poisoning Attack against Estimating from Pairwise Comparisons | Code | 0
Improved Certified Defenses against Data Poisoning with (Deterministic) Finite Aggregation | Code | 0
Indiscriminate Data Poisoning Attacks on Neural Networks | Code | 0
Training-free Lexical Backdoor Attacks on Language Models | Code | 0
FullCert: Deterministic End-to-End Certification for Training and Inference of Neural Networks | Code | 0
From Trojan Horses to Castle Walls: Unveiling Bilateral Data Poisoning Effects in Diffusion Models | Code | 0
Page 42 of 50

No leaderboard results yet.