SOTAVerified

Data Poisoning

Data Poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, causing it to assign malicious examples to attacker-chosen classes (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
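The definition above can be made concrete with a small sketch (not from the source): a label-flipping poisoning attack against a toy logistic-regression spam classifier. All data, names, and parameters here are synthetic assumptions chosen for illustration — the attacker relabels most "spam" training points as "safe", and the retrained model then lets fresh spam-like inputs through.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: class 0 ("safe") clustered at -2, class 1 ("spam") at +2.
X = np.vstack([rng.normal(-2.0, 1.0, (100, 2)),
               rng.normal(+2.0, 1.0, (100, 2))])
y = np.concatenate([np.zeros(100), np.ones(100)])

def train_logreg(X, y, lr=0.1, epochs=500):
    """Plain logistic regression fit by batch gradient descent."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid
        g = p - y                               # d(loss)/d(logit)
        w -= lr * X.T @ g / len(y)
        b -= lr * g.mean()
    return w, b

def predict(w, b, X):
    return (X @ w + b > 0).astype(int)

# Model trained on clean labels.
w_clean, b_clean = train_logreg(X, y)

# Label-flipping poisoning: relabel 80 of the 100 "spam" training
# points as "safe", then retrain on the poisoned labels.
y_pois = y.copy()
flip = rng.choice(np.where(y == 1)[0], size=80, replace=False)
y_pois[flip] = 0
w_pois, b_pois = train_logreg(X, y_pois)

# Fresh spam-like test points: the clean model flags most of them;
# the poisoned model's detection rate drops sharply.
X_test = rng.normal(+2.0, 1.0, (200, 2))
print("clean model flags spam:   ", predict(w_clean, b_clean, X_test).mean())
print("poisoned model flags spam:", predict(w_pois, b_pois, X_test).mean())
```

This is the simplest (dirty-label) variant; many of the papers listed below study clean-label, backdoor, or federated-learning variants of the same idea.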

Papers

Showing 151–175 of 492 papers

| Title | Status | Hype |
| --- | --- | --- |
| Adversarial Attacks to Machine Learning-Based Smart Healthcare Systems | | 0 |
| Execute Order 66: Targeted Data Poisoning for Reinforcement Learning | | 0 |
| Analyzing the vulnerabilities in SplitFed Learning: Assessing the robustness against Data Poisoning Attacks | | 0 |
| BiCert: A Bilinear Mixed Integer Programming Formulation for Precise Certified Bounds Against Data Poisoning Attacks | | 0 |
| Influence-Driven Data Poisoning in Graph-Based Semi-Supervised Classifiers | | 0 |
| Degree-Preserving Randomized Response for Graph Neural Networks under Local Differential Privacy | | 0 |
| Analyzing the Robustness of Decentralized Horizontal and Vertical Federated Learning Architectures in a Non-IID Scenario | | 0 |
| Defending Against Adversarial Denial-of-Service Data Poisoning Attacks | | 0 |
| Beyond Boundaries: A Comprehensive Survey of Transferable Attacks on AI Systems | | 0 |
| A Bayesian Incentive Mechanism for Poison-Resilient Federated Learning | | 0 |
| Efficient and Private: Memorisation under differentially private parameter-efficient fine-tuning in language models | | 0 |
| Beyond the Model: Data Pre-processing Attack to Deep Learning Models in Android Apps | | 0 |
| Denoising Autoencoder-based Defensive Distillation as an Adversarial Robustness Algorithm | | 0 |
| De-Pois: An Attack-Agnostic Defense against Data Poisoning Attacks | | 0 |
| Explainable Label-flipping Attacks on Human Emotion Assessment System | | 0 |
| Detecting Backdoors in Deep Text Classifiers | | 0 |
| Exploring Vulnerabilities and Protections in Large Language Models: A Survey | | 0 |
| Detection of Physiological Data Tampering Attacks with Quantum Machine Learning | | 0 |
| Adversarial Attacks are a Surprisingly Strong Baseline for Poisoning Few-Shot Meta-Learners | | 0 |
| Devil's Hand: Data Poisoning Attacks to Locally Private Graph Learning Protocols | | 0 |
| Blockchain for Large Language Model Security and Safety: A Holistic Survey | | 0 |
| An Investigation of Data Poisoning Defenses for Online Learning | | 0 |
| Distributed Federated Learning for Vehicular Network Security: Anomaly Detection Benefits and Multi-Domain Attack Threats | | 0 |
| Diversity-aware Dual-promotion Poisoning Attack on Sequential Recommendation | | 0 |
| Defending Backdoor Data Poisoning Attacks by Using Noisy Label Defense Algorithm | | 0 |
Page 7 of 20
