SOTAVerified

Data Poisoning

Data poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, so that the model assigns malicious examples to classes the attacker desires (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
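The definition above can be illustrated with a minimal label-flipping sketch. The toy feature vectors, labels, and nearest-centroid classifier below are illustrative assumptions, not taken from the cited paper: the attacker flips the labels of some spam training examples to "safe", which shifts the learned class centroids and causes a spam input to be classified as safe.

```python
# Toy label-flipping data-poisoning demo (hypothetical data, not from the
# cited paper). A nearest-centroid classifier is trained on clean vs.
# poisoned labels to show how flipped labels move the decision boundary.

def centroid(points):
    """Component-wise mean of a list of feature vectors."""
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(len(points[0]))]

def train(X, y):
    """Return one centroid per class label."""
    return {c: centroid([x for x, label in zip(X, y) if label == c])
            for c in sorted(set(y))}

def predict(model, x):
    """Assign x to the class whose centroid is nearest (squared distance)."""
    return min(model, key=lambda c: sum((a - b) ** 2
                                        for a, b in zip(model[c], x)))

# Toy features: [num_links, num_spam_words]; label 1 = spam, 0 = safe.
X = [[0, 1], [1, 0], [1, 1], [8, 9], [9, 8], [9, 9]]
y = [0, 0, 0, 1, 1, 1]

clean = train(X, y)
print(predict(clean, [6, 6]))      # borderline spam correctly flagged -> 1

# Attacker flips two spam labels to "safe", dragging the "safe"
# centroid toward the spam region.
y_poisoned = [0, 0, 0, 0, 0, 1]
poisoned = train(X, y_poisoned)
print(predict(poisoned, [6, 6]))   # same input now labeled safe -> 0
```

The same mechanism scales to real learners: with enough flipped (or otherwise crafted) training points, the attacker steers the decision boundary so that chosen malicious inputs land in the desired class.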

Papers

Showing 341–350 of 492 papers

| Title | Status | Hype |
| --- | --- | --- |
| Interactive System-wise Anomaly Detection | | 0 |
| Inverting Gradient Attacks Makes Powerful Data Poisoning | | 0 |
| Investigating cybersecurity incidents using large language models in latest-generation wireless networks | | 0 |
| Invisible Backdoor Attacks Using Data Poisoning in the Frequency Domain | | 0 |
| TrISec: Training Data-Unaware Imperceptible Security Attacks on Deep Neural Networks | | 0 |
| Is feature selection secure against training data poisoning? | | 0 |
| Just How Toxic is Data Poisoning? A Benchmark for Backdoor and Data Poisoning Attacks | | 0 |
| Label Flipping Data Poisoning Attack Against Wearable Human Activity Recognition System | | 0 |
| Label Sanitization against Label Flipping Poisoning Attacks | | 0 |
| From Vulnerabilities to Remediation: A Systematic Literature Review of LLMs in Code Security | | 0 |
