
Data Poisoning

Data poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, so that the model assigns malicious examples to a class of the attacker's choosing (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
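The definition above can be illustrated with a minimal, hypothetical sketch. The toy nearest-centroid "spam filter", the 1-D feature values, and the injected mislabeled points below are all illustrative assumptions and do not come from any of the papers listed on this page; they only show the mechanism: corrupting the training set changes how a later input is classified.

```python
# Toy illustration of data poisoning (all names and values are hypothetical).
# A 1-D nearest-centroid classifier separates "spam" (high feature value,
# e.g. many suspicious keywords) from "safe" (low feature value). The
# attacker injects mislabeled high-value examples tagged "safe", dragging
# the "safe" centroid upward so a spam-like input lands in the attacker's
# desired class.

def centroids(data):
    """Mean feature value per label."""
    sums, counts = {}, {}
    for x, y in data:
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def predict(data, x):
    """Classify x by the nearest class centroid of the training data."""
    c = centroids(data)
    return min(c, key=lambda y: abs(c[y] - x))

# Clean training data: (feature value, label).
clean = [(0.0, "safe"), (1.0, "safe"), (2.0, "safe"),
         (8.0, "spam"), (9.0, "spam"), (10.0, "spam")]

# Poisoned copy: inject spam-like points deliberately mislabeled "safe".
poisoned = clean + [(9.5, "safe")] * 10

print(predict(clean, 8.0))     # spam
print(predict(poisoned, 8.0))  # safe  (the attacker's desired class)
```

Real attacks in the papers below are far subtler (clean-label poisoning, backdoor triggers, gradient-based crafting), but the objective is the same: shift the learned decision rule by corrupting its training data.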

Papers

Showing 151–200 of 492 papers

Title (each paper below currently has a Hype score of 0; the Status column is empty)

Using Anomaly Detection to Detect Poisoning Attacks in Federated Learning Applications
Blockchain-based Federated Recommendation with Incentive Mechanism
Adversarial Attacks to Machine Learning-Based Smart Healthcare Systems
Filter, Obstruct and Dilute: Defending Against Backdoor Attacks on Semi-Supervised Learning
Analyzing the vulnerabilities in SplitFed Learning: Assessing the robustness against Data Poisoning Attacks
BiCert: A Bilinear Mixed Integer Programming Formulation for Precise Certified Bounds Against Data Poisoning Attacks
Federated Transfer-Ordered-Personalized Learning for Driver Monitoring Application
Beyond the Model: Data Pre-processing Attack to Deep Learning Models in Android Apps
Degree-Preserving Randomized Response for Graph Neural Networks under Local Differential Privacy
Analyzing the Robustness of Decentralized Horizontal and Vertical Federated Learning Architectures in a Non-IID Scenario
Defending Against Adversarial Denial-of-Service Data Poisoning Attacks
Beyond Boundaries: A Comprehensive Survey of Transferable Attacks on AI Systems
Denoising Autoencoder-based Defensive Distillation as an Adversarial Robustness Algorithm
De-Pois: An Attack-Agnostic Defense against Data Poisoning Attacks
A Bayesian Incentive Mechanism for Poison-Resilient Federated Learning
Detecting Backdoors in Deep Text Classifiers
Federated Unlearning
Detection of Physiological Data Tampering Attacks with Quantum Machine Learning
FLock: Defending Malicious Behaviors in Federated Learning with Blockchain
Devil's Hand: Data Poisoning Attacks to Locally Private Graph Learning Protocols
Blockchain for Large Language Model Security and Safety: A Holistic Survey
An Investigation of Data Poisoning Defenses for Online Learning
Distributed Federated Learning for Vehicular Network Security: Anomaly Detection Benefits and Multi-Domain Attack Threats
Diversity-aware Dual-promotion Poisoning Attack on Sequential Recommendation
FR-GAN: Fair and Robust Training
BrainWash: A Poisoning Attack to Forget in Continual Learning
Don't Forget What I did?: Assessing Client Contributions in Federated Learning
DP-InstaHide: Data Augmentations Provably Enhance Guarantees Against Dataset Manipulations
Adversarial Attacks are a Surprisingly Strong Baseline for Poisoning Few-Shot Meta-Learners
Breaking Fair Binary Classification with Optimal Flipping Attacks
Dual Model Replacement:invisible Multi-target Backdoor Attack based on Federal Learning
Defending Backdoor Data Poisoning Attacks by Using Noisy Label Defense Algorithm
Influence-Driven Data Poisoning in Graph-Based Semi-Supervised Classifiers
Efficient and Private: Memorisation under differentially private parameter-efficient fine-tuning in language models
Can't Boil This Frog: Robustness of Online-Trained Autoencoder-Based Anomaly Detectors to Adversarial Poisoning Attacks
Empirical Perturbation Analysis of Linear System Solvers from a Data Poisoning Perspective
Evaluating Impact of User-Cluster Targeted Attacks in Matrix Factorisation Recommenders
Cascading Adversarial Bias from Injection to Distillation in Language Models
CATFL: Certificateless Authentication-based Trustworthy Federated Learning for 6G Semantic Communications
Execute Order 66: Targeted Data Poisoning for Reinforcement Learning
Fed-Credit: Robust Federated Learning with Credibility Management
Explainable Label-flipping Attacks on Human Emotion Assessment System
A Robust Attack: Displacement Backdoor Attack
A Backdoor Approach with Inverted Labels Using Dirty Label-Flipping Attacks
Exploring Vulnerabilities and Protections in Large Language Models: A Survey
Face Recognition in the age of CLIP & Billion image datasets
Fairness-aware Summarization for Justified Decision-Making
FedCom: A Byzantine-Robust Local Model Aggregation Rule Using Data Commitment for Federated Learning
Defending Against Backdoor Attacks Using Ensembles of Weak Learners
Defending against Backdoor Attack on Deep Neural Networks
Page 4 of 10

No leaderboard results yet.