Data Poisoning

Data Poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, so that the model assigns malicious examples to classes the attacker desires (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
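The definition above can be illustrated with a minimal, hypothetical sketch (not from the cited source): an attacker injects "spam-looking" training points that are deliberately mislabeled as safe, so a simple 1-nearest-neighbor filter learns to wave malicious examples through. All names, cluster positions, and counts here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-class dataset: "ham" (label 0) clusters around -2, "spam" (label 1) around +2.
X_train = np.concatenate([rng.normal(-2, 0.5, size=(50, 2)),
                          rng.normal(+2, 0.5, size=(50, 2))])
y_train = np.array([0] * 50 + [1] * 50)

def nn1_predict(X_tr, y_tr, X):
    """1-nearest-neighbor classifier: copy the label of the closest training point."""
    d = np.linalg.norm(X[:, None, :] - X_tr[None, :, :], axis=2)
    return y_tr[d.argmin(axis=1)]

# Poisoning step: the attacker injects spam-looking points mislabeled as "ham",
# forming a tight cloud inside the spam region of feature space.
X_poison = rng.normal(+2, 0.1, size=(60, 2))
y_poison = np.zeros(60, dtype=int)
X_bad = np.concatenate([X_train, X_poison])
y_bad = np.concatenate([y_train, y_poison])

# Malicious examples presented at test time, drawn from the spam region.
malicious = rng.normal(+2, 0.1, size=(20, 2))

clean_rate = (nn1_predict(X_train, y_train, malicious) == 1).mean()
poisoned_rate = (nn1_predict(X_bad, y_bad, malicious) == 1).mean()
print("clean model flags spam:", clean_rate)
print("poisoned model flags spam:", poisoned_rate)
```

The clean model flags essentially all of the malicious points, while the poisoned model mostly labels them "ham": the injected mislabeled cloud sits closer to the malicious inputs than any genuine spam example does. Real attacks (label flipping, backdoor triggers, gradient-based poison crafting) are far subtler, but the mechanism is the same: corrupt training data steers test-time predictions.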

Papers

Showing 301–350 of 492 papers

Temporal Robustness against Data Poisoning
The Price of Tailoring the Index to Your Data: Poisoning Attacks on Learned Index Structures
The Stronger the Diffusion Model, the Easier the Backdoor: Data Poisoning to Induce Copyright Breaches Without Adjusting Finetuning Pipeline
Data Poisoning Attack against Knowledge Graph Embedding
Towards Multi-Objective Statistically Fair Federated Learning
Towards Poisoning Fair Representations
Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization
Poison Forensics: Traceback of Data Poisoning Attacks in Neural Networks
Trading Devil Final: Backdoor attack via Stock market and Bayesian Optimization
Trading Devil RL: Backdoor attack via Stock market, Bayesian Optimization and Reinforcement Learning
Training set cleansing of backdoor poisoning by self-supervised representation learning
Data Poisoning Attack Aiming the Vulnerability of Continual Learning
Model-Agnostic Explanations using Minimal Forcing Subsets
TrojanTime: Backdoor Attacks on Time Series Classification
TrojFSP: Trojan Insertion in Few-shot Prompt Tuning
Try to Avoid Attacks: A Federated Data Sanitization Defense for Healthcare IoMT Systems
Tuning without Peeking: Provable Privacy and Generalization Bounds for LLM Post-Training
Turning Generative Models Degenerate: The Power of Data Poisoning Attacks
Understanding Influence Functions and Datamodels via Harmonic Analysis
Unlearnable Examples Detection via Iterative Filtering
UTrace: Poisoning Forensics for Private Collaborative Learning
VPN: Verification of Poisoning in Neural Networks
What's Pulling the Strings? Evaluating Integrity and Attribution in AI Training and Inference through Concept Shift
What Distributions are Robust to Indiscriminate Poisoning Attacks for Linear Learners?
Wild Patterns Reloaded: A Survey of Machine Learning Security against Training Data Poisoning
Winter Soldier: Backdooring Language Models at Pre-Training with Indirect Data Poisoning
Wolf in Sheep's Clothing - The Downscaling Attack Against Deep Learning Applications
You Autocomplete Me: Poisoning Vulnerabilities in Neural Code Completion
Histopathological Image Classification and Vulnerability Analysis using Federated Learning
How Robust are Randomized Smoothing based Defenses to Data Poisoning?
Humpty Dumpty: Controlling Word Meanings via Corpus Poisoning
WW-FL: Secure and Private Large-Scale Federated Learning
Hyperparameter Learning under Data Poisoning: Analysis of the Influence of Regularization via Multiobjective Bilevel Optimization
If You Don't Understand It, Don't Use It: Eliminating Trojans with Filters Between Layers
Imperceptible Rhythm Backdoor Attacks: Exploring Rhythm Transformation for Embedding Undetectable Vulnerabilities on Speech Recognition
Adversarial Backdoor Attack by Naturalistic Data Poisoning on Trajectory Prediction in Autonomous Driving
Indiscriminate Data Poisoning Attacks on Pre-trained Feature Extractors
Influence Based Defense Against Data Poisoning Attacks in Online Learning
Influence Function based Data Poisoning Attacks to Top-N Recommender Systems
Instructions as Backdoors: Backdoor Vulnerabilities of Instruction Tuning for Large Language Models
Interactive System-wise Anomaly Detection
Inverting Gradient Attacks Makes Powerful Data Poisoning
Investigating cybersecurity incidents using large language models in latest-generation wireless networks
Invisible Backdoor Attacks Using Data Poisoning in the Frequency Domain
TrISec: Training Data-Unaware Imperceptible Security Attacks on Deep Neural Networks
Is feature selection secure against training data poisoning?
Just How Toxic is Data Poisoning? A Benchmark for Backdoor and Data Poisoning Attacks
Label Flipping Data Poisoning Attack Against Wearable Human Activity Recognition System
Label Sanitization against Label Flipping Poisoning Attacks
From Vulnerabilities to Remediation: A Systematic Literature Review of LLMs in Code Security
Page 7 of 10