
Data Poisoning

Data Poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, so that the model assigns malicious examples to a desired class (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
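The label-flipping variant of this attack can be illustrated with a minimal sketch (a hypothetical toy example, not from the source paper): a nearest-centroid classifier is trained on a one-dimensional "spam score" feature, and flipping the labels of a few high-scoring spam training points drags the "ham" centroid upward until a spam-like test example is classified as safe.

```python
# Toy label-flipping data poisoning (illustrative sketch; all names and
# numbers here are invented for the example).

def centroid(points):
    return sum(points) / len(points)

def train(data):
    # data: list of (feature, label) pairs; returns one centroid per class
    classes = {}
    for x, y in data:
        classes.setdefault(y, []).append(x)
    return {y: centroid(xs) for y, xs in classes.items()}

def predict(model, x):
    # assign x to the class whose centroid is nearest
    return min(model, key=lambda y: abs(x - model[y]))

clean = [(0.1, "ham"), (0.2, "ham"), (0.3, "ham"),
         (0.8, "spam"), (0.9, "spam"), (1.0, "spam")]

# The attacker flips the labels of the two highest-scoring spam examples,
# pulling the "ham" centroid toward the spam region.
poisoned = [(x, "ham" if x >= 0.9 else y) for x, y in clean]

malicious = 0.6  # a spam-like test e-mail
print(predict(train(clean), malicious))     # spam
print(predict(train(poisoned), malicious))  # ham
```

Even this crude attack shows the mechanism behind the definition above: the attacker never touches the model or the test input, only a handful of training labels.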

Papers

Showing 326–350 of 492 papers

Title | Status | Hype
Winter Soldier: Backdooring Language Models at Pre-Training with Indirect Data Poisoning | | 0
Wolf in Sheep's Clothing - The Downscaling Attack Against Deep Learning Applications | | 0
You Autocomplete Me: Poisoning Vulnerabilities in Neural Code Completion | | 0
Histopathological Image Classification and Vulnerability Analysis using Federated Learning | | 0
How Robust are Randomized Smoothing based Defenses to Data Poisoning? | | 0
Humpty Dumpty: Controlling Word Meanings via Corpus Poisoning | | 0
WW-FL: Secure and Private Large-Scale Federated Learning | | 0
Hyperparameter Learning under Data Poisoning: Analysis of the Influence of Regularization via Multiobjective Bilevel Optimization | | 0
If You Don't Understand It, Don't Use It: Eliminating Trojans with Filters Between Layers | | 0
Imperceptible Rhythm Backdoor Attacks: Exploring Rhythm Transformation for Embedding Undetectable Vulnerabilities on Speech Recognition | | 0
Adversarial Backdoor Attack by Naturalistic Data Poisoning on Trajectory Prediction in Autonomous Driving | | 0
Indiscriminate Data Poisoning Attacks on Pre-trained Feature Extractors | | 0
Influence Based Defense Against Data Poisoning Attacks in Online Learning | | 0
Influence Function based Data Poisoning Attacks to Top-N Recommender Systems | | 0
Instructions as Backdoors: Backdoor Vulnerabilities of Instruction Tuning for Large Language Models | | 0
Interactive System-wise Anomaly Detection | | 0
Inverting Gradient Attacks Makes Powerful Data Poisoning | | 0
Investigating cybersecurity incidents using large language models in latest-generation wireless networks | | 0
Invisible Backdoor Attacks Using Data Poisoning in the Frequency Domain | | 0
TrISec: Training Data-Unaware Imperceptible Security Attacks on Deep Neural Networks | | 0
Is feature selection secure against training data poisoning? | | 0
Just How Toxic is Data Poisoning? A Benchmark for Backdoor and Data Poisoning Attacks | | 0
Label Flipping Data Poisoning Attack Against Wearable Human Activity Recognition System | | 0
Label Sanitization against Label Flipping Poisoning Attacks | | 0
From Vulnerabilities to Remediation: A Systematic Literature Review of LLMs in Code Security | | 0
Page 14 of 20

No leaderboard results yet.