SOTAVerified

Data Poisoning

Data Poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, so that the model labels malicious examples as a desired class (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
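To make the definition concrete, below is a minimal sketch (not from the source) of a targeted poisoning attack against a 1-nearest-neighbor classifier: the attacker injects mislabeled "safe" points at the location of a malicious example, flipping that example's predicted class. The data, labels, and classifier are all hypothetical illustrations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-class training set: class 0 ("safe") clustered near (0, 0),
# class 1 ("spam") clustered near (4, 4).
X_clean = np.vstack([rng.normal(0.0, 0.5, (50, 2)),
                     rng.normal(4.0, 0.5, (50, 2))])
y_clean = np.array([0] * 50 + [1] * 50)

def predict_1nn(X_train, y_train, x):
    """1-nearest-neighbor classifier: return the label of the closest training point."""
    dists = np.linalg.norm(X_train - x, axis=1)
    return int(y_train[np.argmin(dists)])

# A malicious example the attacker wants misclassified as "safe".
target = np.array([4.0, 4.0])
print(predict_1nn(X_clean, y_clean, target))  # classified as spam (1) on clean data

# Poisoning: inject a handful of points at the target region, mislabeled as "safe".
X_poison = np.tile(target, (5, 1))
y_poison = np.zeros(5, dtype=int)
X_train = np.vstack([X_clean, X_poison])
y_train = np.concatenate([y_clean, y_poison])

print(predict_1nn(X_train, y_train, target))  # now classified as safe (0)
```

Real attacks in the papers listed below target far more complex learners (deep networks, federated and continual learning, contrastive pre-training), but the mechanism is the same: corrupt the training set so the trained model errs on attacker-chosen inputs.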

Papers

Showing 381-390 of 492 papers

Title | Status | Hype
A Study of Backdoors in Instruction Fine-tuned Language Models | | 0
Open Challenges in Multi-Agent Security: Towards Secure Systems of Interacting AI Agents | | 0
Optimizing ML Training with Metagradient Descent | | 0
Oriole: Thwarting Privacy against Trustworthy Deep Learning Models | | 0
OVLA: Neural Network Ownership Verification using Latent Watermarks | | 0
PACOL: Poisoning Attacks Against Continual Learners | | 0
Partner in Crime: Boosting Targeted Poisoning Attacks against Federated Learning | | 0
Pick your Poison: Undetectability versus Robustness in Data Poisoning Attacks | | 0
PoisHygiene: Detecting and Mitigating Poisoning Attacks in Neural Networks | | 0
PoisonedEncoder: Poisoning the Unlabeled Pre-training Data in Contrastive Learning | | 0
Page 39 of 50

No leaderboard results yet.